[jira] [Commented] (HBASE-4397) -ROOT-, .META. tables stay offline for too long in recovery phase after all RSs are shutdown at the same time
[ https://issues.apache.org/jira/browse/HBASE-4397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190616#comment-13190616 ]

Hudson commented on HBASE-4397:
-------------------------------

Integrated in HBase-TRUNK-security #84 (See [https://builds.apache.org/job/HBase-TRUNK-security/84/])
HBASE-5237 Addendum for HBASE-5160 and HBASE-4397 (Ram)

ramkrishna :
Files :
* /hbase/trunk/CHANGES.txt
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java

-ROOT-, .META. tables stay offline for too long in recovery phase after all RSs are shutdown at the same time
-------------------------------------------------------------------------------------------------------------

Key: HBASE-4397
URL: https://issues.apache.org/jira/browse/HBASE-4397
Project: HBase
Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
Fix For: 0.92.0, 0.94.0
Attachments: HBASE-4397-0.92.patch

1. Shut down all RSs.
2. Bring all RSs back online.

-ROOT- and .META. stay in the offline state until the timeout monitor forces assignment 30 minutes later. That is because HMaster cannot find an RS to assign the tables to during the assign operation.
2011-09-13 13:25:52,743 WARN org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of -ROOT-,,0.70236052 to sea-lab-4,60020,1315870341387, trying to assign elsewhere instead; retry=0
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:345)
	at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1002)
	at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:854)
	at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:148)
	at $Proxy9.openRegion(Unknown Source)
	at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:407)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1408)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1153)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1128)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1123)
	at org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:1788)
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRoot(ServerShutdownHandler.java:100)
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRootWithRetries(ServerShutdownHandler.java:118)
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:181)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:167)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
2011-09-13 13:25:52,743 WARN org.apache.hadoop.hbase.master.AssignmentManager: Unable to find a viable location to assign region -ROOT-,,0.70236052

Possible fixes:
1. Have ServerManager handle the server-online event, similar to how RegionServerTracker.java calls serverManager.expireServer when a server goes down.
2. Make the timeoutMonitor handle this situation better. This is a special situation in the cluster; the 30-minute timeout can be skipped.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
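Fix idea #1 above, reacting to a server-online event instead of waiting for the timeout monitor, could be sketched roughly as follows. The class and method names here are illustrative only, not the actual AssignmentManager/ServerManager code.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical sketch: instead of failing the assign and waiting 30 minutes
// for the timeout monitor, block until a region server registers as online.
class AssignmentSketch {
    private final List<String> onlineServers = new CopyOnWriteArrayList<>();
    private final Object serverJoined = new Object();

    // Called by a tracker (analogous to RegionServerTracker) when an RS registers.
    void serverOnline(String serverName) {
        onlineServers.add(serverName);
        synchronized (serverJoined) { serverJoined.notifyAll(); }
    }

    // Wait for at least one viable server instead of giving up immediately.
    // Returns null if none appears within the timeout (caller falls back to
    // the timeout monitor, as today).
    String waitForViableServer(long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        synchronized (serverJoined) {
            while (onlineServers.isEmpty()) {
                long remaining = deadline - System.currentTimeMillis();
                if (remaining <= 0) return null;
                serverJoined.wait(remaining);
            }
        }
        return onlineServers.get(0);
    }
}
```

A caller in the master would invoke waitForViableServer before retrying the assign, so the root/meta assignment proceeds as soon as the first RS comes back.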
[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
[ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190714#comment-13190714 ]

Hudson commented on HBASE-5235:
-------------------------------

Integrated in HBase-TRUNK #2644 (See [https://builds.apache.org/job/HBase-TRUNK/2644/])
HBASE-5235 HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions. (Ram)

ramkrishna :
Files :
* /hbase/trunk/CHANGES.txt
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java

HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
------------------------------------------------------------------------------------------------------

Key: HBASE-5235
URL: https://issues.apache.org/jira/browse/HBASE-5235
Project: HBase
Issue Type: Bug
Affects Versions: 0.92.0, 0.90.5
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Fix For: 0.92.1, 0.90.6
Attachments: HBASE-5235_0.90.patch, HBASE-5235_0.90_1.patch, HBASE-5235_0.90_2.patch, HBASE-5235_trunk.patch

Please find the analysis below; correct me if I am wrong.

{code}
2012-01-15 05:14:02,374 FATAL org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got while writing log entry to log
java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting...
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026)
{code}

Here we get an exception in one of the writer threads. Any such exception is held in an atomic variable:

{code}
private void writerThreadError(Throwable t) {
  thrown.compareAndSet(null, t);
}
{code}

In the finally block of splitLog we try to close the streams:

{code}
for (WriterThread t : writerThreads) {
  try {
    t.join();
  } catch (InterruptedException ie) {
    throw new IOException(ie);
  }
  checkForErrors();
}
LOG.info("Split writers finished");
return closeStreams();
{code}

Inside checkForErrors:

{code}
private void checkForErrors() throws IOException {
  Throwable thrown = this.thrown.get();
  if (thrown == null) return;
  if (thrown instanceof IOException) {
    throw (IOException) thrown;
  } else {
    throw new RuntimeException(thrown);
  }
}
{code}

So once checkForErrors throws the exception, closeStreams() is never reached and the DFS streamer threads are not closed.
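The leak described above, where checkForErrors() throws before closeStreams() ever runs, can be avoided by closing the streams in a finally block. A minimal sketch of that idea, with simplified names standing in for the real HLogSplitter internals:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Sketch only: record the first writer error in an AtomicReference (as the
// quoted snippet does), but close the streams in a finally block so a
// surfaced error no longer leaks open output streams.
class SplitterSketch {
    private final AtomicReference<Throwable> thrown = new AtomicReference<>();

    // First error wins; later errors are dropped, matching writerThreadError.
    void writerThreadError(Throwable t) { thrown.compareAndSet(null, t); }

    private void checkForErrors() throws IOException {
        Throwable t = thrown.get();
        if (t == null) return;
        if (t instanceof IOException) throw (IOException) t;
        throw new RuntimeException(t);
    }

    // Always close every stream, then let the first recorded error propagate.
    void finishWriters(List<Closeable> streams) throws IOException {
        try {
            checkForErrors();
        } finally {
            for (Closeable c : streams) {
                try { c.close(); } catch (IOException ignored) { /* best effort */ }
            }
        }
    }
}
```

With this shape, the IOException from the failed writer still reaches the caller, but only after every stream has had close() attempted on it.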
[jira] [Commented] (HBASE-5246) Regenerate code with thrift 0.8.0
[ https://issues.apache.org/jira/browse/HBASE-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190713#comment-13190713 ]

Hudson commented on HBASE-5246:
-------------------------------

Integrated in HBase-TRUNK #2644 (See [https://builds.apache.org/job/HBase-TRUNK/2644/])
HBASE-5246 Regenerate code with thrift 0.8.0 (Scott Chen)

tedyu :
Files :
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/ColumnDescriptor.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/IOError.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/IllegalArgument.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/Mutation.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/TCell.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRegionInfo.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRowResult.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/TScan.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumn.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnIncrement.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnValue.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDelete.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDeleteType.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TGet.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THBaseService.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIOError.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIllegalArgument.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIncrement.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TPut.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TResult.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TScan.java
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTimeRange.java

Regenerate code with thrift 0.8.0
---------------------------------

Key: HBASE-5246
URL: https://issues.apache.org/jira/browse/HBASE-5246
Project: HBase
Issue Type: Improvement
Components: regionserver
Reporter: Scott Chen
Assignee: Scott Chen
Priority: Minor
Fix For: 0.94.0
Attachments: HBASE-5246.D1371.1.patch

In HBASE-5201 we upgraded libthrift.jar to 0.8.0. The sources generated from the *.thrift files should be regenerated as well.
[jira] [Commented] (HBASE-5254) [book] book.xml - Arch/Regions added link to Troubleshooting for hbase objects
[ https://issues.apache.org/jira/browse/HBASE-5254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190885#comment-13190885 ]

Hudson commented on HBASE-5254:
-------------------------------

Integrated in HBase-TRUNK-security #85 (See [https://builds.apache.org/job/HBase-TRUNK-security/85/])
hbase-5254 book.xml. Arch/Regions, added link to troubleshooting section on hbase objects on HDFS

[book] book.xml - Arch/Regions added link to Troubleshooting for hbase objects
------------------------------------------------------------------------------

Key: HBASE-5254
URL: https://issues.apache.org/jira/browse/HBASE-5254
Project: HBase
Issue Type: Improvement
Reporter: Doug Meil
Assignee: Doug Meil
Priority: Trivial
Attachments: book_hbase_5254.xml.patch

book.xml
* In Arch/Regions, under the object hierarchy chart I just added, added a link to the Troubleshooting section that shows what the HBase objects look like when written to HDFS.
[jira] [Commented] (HBASE-5245) HBase shell should use alternate jruby if JRUBY_HOME is set, should pass along JRUBY_OPTS
[ https://issues.apache.org/jira/browse/HBASE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190884#comment-13190884 ]

Hudson commented on HBASE-5245:
-------------------------------

Integrated in HBase-TRUNK-security #85 (See [https://builds.apache.org/job/HBase-TRUNK-security/85/])
HBASE-5245 HBase shell should use alternate jruby if JRUBY_HOME is set, should pass along JRUBY_OPTS

stack :
Files :
* /hbase/trunk/bin/hbase

HBase shell should use alternate jruby if JRUBY_HOME is set, should pass along JRUBY_OPTS
-----------------------------------------------------------------------------------------

Key: HBASE-5245
URL: https://issues.apache.org/jira/browse/HBASE-5245
Project: HBase
Issue Type: Improvement
Components: shell
Affects Versions: 0.90.4
Reporter: Philip (flip) Kromer
Priority: Minor
Attachments: 5245-v2.txt, hbase-jruby_home-and-jruby_opts.patch
Original Estimate: 0h
Remaining Estimate: 0h

When invoking {{hbase shell}}, the hbase runner launches the jruby jar directly and therefore behaves differently from the traditional jruby runner. Specifically, it:
* does not respect the {{JRUBY_OPTS}} environment variable (among other things, I cannot launch the shell in ruby-1.9 mode)
* does not respect the {{JRUBY_HOME}} environment variable (placing things in an inconsistent state if my classpath holds the system jruby)

This patch allows you to use an alternative jruby and to specify options to the jruby jar:
* When the command is 'shell', adds {{$JRUBY_OPTS}} to the CLASS
* When the command is 'shell' and {{$JRUBY_HOME}} is set, adds {{$JRUBY_HOME/lib/jruby.jar}} to the classpath, and sets the {{-Djruby.home}} and {{-Djruby.job}} config variables
[jira] [Commented] (HBASE-5252) [book] book.xml - added section in Data Model about joins (and the lack thereof)
[ https://issues.apache.org/jira/browse/HBASE-5252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190887#comment-13190887 ]

Hudson commented on HBASE-5252:
-------------------------------

Integrated in HBase-TRUNK-security #85 (See [https://builds.apache.org/job/HBase-TRUNK-security/85/])
hbase-5252. book.xml, added section in Data Model about joins

[book] book.xml - added section in Data Model about joins (and the lack thereof)
--------------------------------------------------------------------------------

Key: HBASE-5252
URL: https://issues.apache.org/jira/browse/HBASE-5252
Project: HBase
Issue Type: Improvement
Reporter: Doug Meil
Assignee: Doug Meil
Priority: Minor
Attachments: book_hbase_5252.xml.patch

book.xml
* Added a section in Data Model about joins: HBase doesn't support them out of the box, so you have to do them yourself.
* Also added a link from Schema Design to this new section.

This is a common question on the dist-list.
[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
[ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190883#comment-13190883 ]

Hudson commented on HBASE-5235:
-------------------------------

Integrated in HBase-TRUNK-security #85 (See [https://builds.apache.org/job/HBase-TRUNK-security/85/])
HBASE-5235 HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions. (Ram)

ramkrishna :
Files :
* /hbase/trunk/CHANGES.txt
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java

HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
------------------------------------------------------------------------------------------------------

Key: HBASE-5235
URL: https://issues.apache.org/jira/browse/HBASE-5235
Project: HBase
Issue Type: Bug
Affects Versions: 0.92.0, 0.90.5
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Fix For: 0.92.1, 0.90.6
Attachments: HBASE-5235_0.90.patch, HBASE-5235_0.90_1.patch, HBASE-5235_0.90_2.patch, HBASE-5235_trunk.patch

Please find the analysis below; correct me if I am wrong.

{code}
2012-01-15 05:14:02,374 FATAL org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got while writing log entry to log
java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting...
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026)
{code}

Here we get an exception in one of the writer threads. Any such exception is held in an atomic variable:

{code}
private void writerThreadError(Throwable t) {
  thrown.compareAndSet(null, t);
}
{code}

In the finally block of splitLog we try to close the streams:

{code}
for (WriterThread t : writerThreads) {
  try {
    t.join();
  } catch (InterruptedException ie) {
    throw new IOException(ie);
  }
  checkForErrors();
}
LOG.info("Split writers finished");
return closeStreams();
{code}

Inside checkForErrors:

{code}
private void checkForErrors() throws IOException {
  Throwable thrown = this.thrown.get();
  if (thrown == null) return;
  if (thrown instanceof IOException) {
    throw (IOException) thrown;
  } else {
    throw new RuntimeException(thrown);
  }
}
{code}

So once checkForErrors throws the exception, closeStreams() is never reached and the DFS streamer threads are not closed.
[jira] [Commented] (HBASE-5243) LogSyncerThread not getting shutdown waiting for the interrupted flag
[ https://issues.apache.org/jira/browse/HBASE-5243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190886#comment-13190886 ]

Hudson commented on HBASE-5243:
-------------------------------

Integrated in HBase-TRUNK-security #85 (See [https://builds.apache.org/job/HBase-TRUNK-security/85/])
HBASE-5243 LogSyncerThread not getting shutdown waiting for the interrupted flag (Ram)

ramkrishna :
Files :
* /hbase/trunk/CHANGES.txt
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java

LogSyncerThread not getting shutdown waiting for the interrupted flag
---------------------------------------------------------------------

Key: HBASE-5243
URL: https://issues.apache.org/jira/browse/HBASE-5243
Project: HBase
Issue Type: Bug
Affects Versions: 0.90.5
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Fix For: 0.92.1, 0.90.6
Attachments: HBASE-5243_0.90.patch, HBASE-5243_0.90_1.patch, HBASE-5243_trunk.patch

In LogSyncer's run() we keep looping until the isInterrupted flag is set. But in some cases the DFSClient consumes the InterruptedException, so we end up in an infinite loop in some shutdown cases. I would suggest that, since we are the ones who close down the LogSyncerThread, we introduce a variable like close or shutdown and, based on the state of this flag together with isInterrupted(), make the thread stop.
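The suggested close/shutdown flag could look like the following sketch. It is not the actual HLog code; it only illustrates why a volatile flag stops the thread even when a lower layer (such as the DFS client) swallows the interrupt.

```java
// Sketch only: a syncer-style loop that checks a volatile shutdown flag in
// addition to the interrupt status, so a swallowed InterruptedException
// cannot keep the thread alive forever.
class LogSyncerSketch extends Thread {
    private volatile boolean closing = false;

    @Override
    public void run() {
        while (!closing && !isInterrupted()) {
            try {
                // Placeholder for the real sync-to-HDFS call, which may
                // internally catch and discard InterruptedException.
                Thread.sleep(10);
            } catch (InterruptedException e) {
                // Even if the interrupt is consumed here (as the DFSClient
                // does), the closing flag still terminates the loop.
            }
        }
    }

    void shutdown() {
        closing = true;  // visible to run() immediately because it is volatile
        interrupt();     // still interrupt, in case the thread is blocked
    }
}
```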
[jira] [Commented] (HBASE-5253) [book] book.xml - Arch/Regions - adding chart showing object heirarchy
[ https://issues.apache.org/jira/browse/HBASE-5253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190888#comment-13190888 ]

Hudson commented on HBASE-5253:
-------------------------------

Integrated in HBase-TRUNK-security #85 (See [https://builds.apache.org/job/HBase-TRUNK-security/85/])
hbase-5253 book.xml - adding chart of object heirarchy in Arch/Regions

[book] book.xml - Arch/Regions - adding chart showing object heirarchy
----------------------------------------------------------------------

Key: HBASE-5253
URL: https://issues.apache.org/jira/browse/HBASE-5253
Project: HBase
Issue Type: Improvement
Reporter: Doug Meil
Assignee: Doug Meil
Priority: Minor
Attachments: book_hbase_5253.xml.patch

book.xml
* Adding a word-chart showing the object hierarchy of Regions at the beginning of that section in the Arch chapter. The description up to this point was entirely prose, and it needs even a simple picture.
* Also a minor thing: adding that compression happens at the block level within StoreFiles (block sub-section).
[jira] [Commented] (HBASE-4397) -ROOT-, .META. tables stay offline for too long in recovery phase after all RSs are shutdown at the same time
[ https://issues.apache.org/jira/browse/HBASE-4397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191363#comment-13191363 ]

Hudson commented on HBASE-4397:
-------------------------------

Integrated in HBase-0.92 #257 (See [https://builds.apache.org/job/HBase-0.92/257/])
HBASE-5237 Addendum for HBASE-5160 and HBASE-4397 (Ram)

ramkrishna :
Files :
* /hbase/branches/0.92/CHANGES.txt
* /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java

-ROOT-, .META. tables stay offline for too long in recovery phase after all RSs are shutdown at the same time
-------------------------------------------------------------------------------------------------------------

Key: HBASE-4397
URL: https://issues.apache.org/jira/browse/HBASE-4397
Project: HBase
Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
Fix For: 0.94.0, 0.92.0
Attachments: HBASE-4397-0.92.patch

1. Shut down all RSs.
2. Bring all RSs back online.

-ROOT- and .META. stay in the offline state until the timeout monitor forces assignment 30 minutes later. That is because HMaster cannot find an RS to assign the tables to during the assign operation.
2011-09-13 13:25:52,743 WARN org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of -ROOT-,,0.70236052 to sea-lab-4,60020,1315870341387, trying to assign elsewhere instead; retry=0
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:345)
	at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1002)
	at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:854)
	at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:148)
	at $Proxy9.openRegion(Unknown Source)
	at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:407)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1408)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1153)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1128)
	at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1123)
	at org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:1788)
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRoot(ServerShutdownHandler.java:100)
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRootWithRetries(ServerShutdownHandler.java:118)
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:181)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:167)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
2011-09-13 13:25:52,743 WARN org.apache.hadoop.hbase.master.AssignmentManager: Unable to find a viable location to assign region -ROOT-,,0.70236052

Possible fixes:
1. Have ServerManager handle the server-online event, similar to how RegionServerTracker.java calls serverManager.expireServer when a server goes down.
2. Make the timeoutMonitor handle this situation better. This is a special situation in the cluster; the 30-minute timeout can be skipped.
[jira] [Commented] (HBASE-5243) LogSyncerThread not getting shutdown waiting for the interrupted flag
[ https://issues.apache.org/jira/browse/HBASE-5243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191360#comment-13191360 ]

Hudson commented on HBASE-5243:
-------------------------------

Integrated in HBase-0.92 #257 (See [https://builds.apache.org/job/HBase-0.92/257/])
HBASE-5243 LogSyncerThread not getting shutdown waiting for the interrupted flag (Ram)

ramkrishna :
Files :
* /hbase/branches/0.92/CHANGES.txt
* /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java

LogSyncerThread not getting shutdown waiting for the interrupted flag
---------------------------------------------------------------------

Key: HBASE-5243
URL: https://issues.apache.org/jira/browse/HBASE-5243
Project: HBase
Issue Type: Bug
Affects Versions: 0.90.5
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Fix For: 0.90.6, 0.92.1
Attachments: HBASE-5243_0.90.patch, HBASE-5243_0.90_1.patch, HBASE-5243_trunk.patch

In LogSyncer's run() we keep looping until the isInterrupted flag is set. But in some cases the DFSClient consumes the InterruptedException, so we end up in an infinite loop in some shutdown cases. I would suggest that, since we are the ones who close down the LogSyncerThread, we introduce a variable like close or shutdown and, based on the state of this flag together with isInterrupted(), make the thread stop.
[jira] [Commented] (HBASE-5160) Backport HBASE-4397 - -ROOT-, .META. tables stay offline for too long in recovery phase after all RSs are shutdown at the same time
[ https://issues.apache.org/jira/browse/HBASE-5160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191362#comment-13191362 ]

Hudson commented on HBASE-5160:
-------------------------------

Integrated in HBase-0.92 #257 (See [https://builds.apache.org/job/HBase-0.92/257/])
HBASE-5237 Addendum for HBASE-5160 and HBASE-4397 (Ram)

ramkrishna :
Files :
* /hbase/branches/0.92/CHANGES.txt
* /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java

Backport HBASE-4397 - -ROOT-, .META. tables stay offline for too long in recovery phase after all RSs are shutdown at the same time
-----------------------------------------------------------------------------------------------------------------------------------

Key: HBASE-5160
URL: https://issues.apache.org/jira/browse/HBASE-5160
Project: HBase
Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Fix For: 0.90.6
Attachments: HBASE-5160-AssignmentManager.patch, HBASE-5160_2.patch

Backporting to 0.90.6 considering the importance of the issue.
[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
[ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191359#comment-13191359 ]

Hudson commented on HBASE-5235:
-------------------------------

Integrated in HBase-0.92 #257 (See [https://builds.apache.org/job/HBase-0.92/257/])
HBASE-5235 HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions. (Ram)

ramkrishna :
Files :
* /hbase/branches/0.92/CHANGES.txt
* /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java

HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
------------------------------------------------------------------------------------------------------

Key: HBASE-5235
URL: https://issues.apache.org/jira/browse/HBASE-5235
Project: HBase
Issue Type: Bug
Affects Versions: 0.90.5, 0.92.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Fix For: 0.90.6, 0.92.1
Attachments: HBASE-5235_0.90.patch, HBASE-5235_0.90_1.patch, HBASE-5235_0.90_2.patch, HBASE-5235_trunk.patch

Please find the analysis below; correct me if I am wrong.

{code}
2012-01-15 05:14:02,374 FATAL org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got while writing log entry to log
java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting...
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026)
{code}

Here we get an exception in one of the writer threads. Any such exception is held in an atomic variable:

{code}
private void writerThreadError(Throwable t) {
  thrown.compareAndSet(null, t);
}
{code}

In the finally block of splitLog we try to close the streams:

{code}
for (WriterThread t : writerThreads) {
  try {
    t.join();
  } catch (InterruptedException ie) {
    throw new IOException(ie);
  }
  checkForErrors();
}
LOG.info("Split writers finished");
return closeStreams();
{code}

Inside checkForErrors:

{code}
private void checkForErrors() throws IOException {
  Throwable thrown = this.thrown.get();
  if (thrown == null) return;
  if (thrown instanceof IOException) {
    throw (IOException) thrown;
  } else {
    throw new RuntimeException(thrown);
  }
}
{code}

So once checkForErrors throws the exception, closeStreams() is never reached and the DFS streamer threads are not closed.
[jira] [Commented] (HBASE-5237) Addendum for HBASE-5160 and HBASE-4397
[ https://issues.apache.org/jira/browse/HBASE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191361#comment-13191361 ]

Hudson commented on HBASE-5237:
-------------------------------

Integrated in HBase-0.92 #257 (See [https://builds.apache.org/job/HBase-0.92/257/])
HBASE-5237 Addendum for HBASE-5160 and HBASE-4397 (Ram)

ramkrishna :
Files :
* /hbase/branches/0.92/CHANGES.txt
* /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java

Addendum for HBASE-5160 and HBASE-4397
--------------------------------------

Key: HBASE-5237
URL: https://issues.apache.org/jira/browse/HBASE-5237
Project: HBase
Issue Type: Bug
Affects Versions: 0.90.5
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Fix For: 0.90.6, 0.92.1
Attachments: HBASE-5237_0.90.patch, HBASE-5237_trunk.patch

As part of HBASE-4397 there is one more scenario where the patch has to be applied.

{code}
RegionPlan plan = getRegionPlan(state, forceNewPlan);
if (plan == null) {
  debugLog(state.getRegion(), "Unable to determine a plan to assign " + state);
  return; // Should get reassigned later when RIT times out.
}
{code}

I think in this scenario we should also do:

{code}
this.timeoutMonitor.setAllRegionServersOffline(true);
{code}
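A toy sketch of the suggestion above, with illustrative names standing in for the real AssignmentManager and TimeoutMonitor: when no region plan can be computed (all region servers offline), the monitor is flagged so it can retry promptly rather than wait out the full timeout.

```java
// Sketch only: stand-ins for the real TimeoutMonitor/AssignmentManager.
class TimeoutMonitorSketch {
    private volatile boolean allRegionServersOffline = false;

    void setAllRegionServersOffline(boolean offline) {
        allRegionServersOffline = offline;
    }

    // The monitor loop would consult this to skip the long wait.
    boolean shouldRetryImmediately() {
        return allRegionServersOffline;
    }
}

class AssignSketch {
    // Returns false when no plan exists, after flagging the monitor,
    // mirroring the extra call site the addendum proposes.
    static boolean assign(Object plan, TimeoutMonitorSketch monitor) {
        if (plan == null) {
            monitor.setAllRegionServersOffline(true);
            return false; // reassigned later when RIT times out
        }
        return true;
    }
}
```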
[jira] [Commented] (HBASE-5231) Backport HBASE-3373 (per-table load balancing) to 0.92
[ https://issues.apache.org/jira/browse/HBASE-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191365#comment-13191365 ]

Hudson commented on HBASE-5231:
-------------------------------

Integrated in HBase-0.92 #257 (See [https://builds.apache.org/job/HBase-0.92/257/])
HBASE-5231 Backport HBASE-3373 (per-table load balancing) to 0.92

tedyu :
Files :
* /hbase/branches/0.92/CHANGES.txt
* /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/DefaultLoadBalancer.java
* /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java

Backport HBASE-3373 (per-table load balancing) to 0.92
------------------------------------------------------

Key: HBASE-5231
URL: https://issues.apache.org/jira/browse/HBASE-5231
Project: HBase
Issue Type: Improvement
Reporter: Zhihong Yu
Fix For: 0.92.1
Attachments: 5231-v2.txt, 5231.txt

This JIRA backports per-table load balancing to 0.92.
[jira] [Commented] (HBASE-3373) Allow regions to be load-balanced by table
[ https://issues.apache.org/jira/browse/HBASE-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191364#comment-13191364 ] Hudson commented on HBASE-3373: --- Integrated in HBase-0.92 #257 (See [https://builds.apache.org/job/HBase-0.92/257/]) HBASE-5231 Backport HBASE-3373 (per-table load balancing) to 0.92 tedyu : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/DefaultLoadBalancer.java * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java Allow regions to be load-balanced by table -- Key: HBASE-3373 URL: https://issues.apache.org/jira/browse/HBASE-3373 Project: HBase Issue Type: Improvement Components: master Affects Versions: 0.20.6 Reporter: Ted Yu Assignee: Zhihong Yu Fix For: 0.94.0 Attachments: 3373.txt, HbaseBalancerTest2.java From our experience, cluster can be well balanced and yet, one table's regions may be badly concentrated on few region servers. For example, one table has 839 regions (380 regions at time of table creation) out of which 202 are on one server. It would be desirable for load balancer to distribute regions for specified tables evenly across the cluster. Each of such tables has number of regions many times the cluster size. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
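The idea behind per-table balancing can be shown with a toy balancer. This is a hedged sketch of the concept only, not DefaultLoadBalancer's actual algorithm: each table's regions are spread round-robin across the servers independently, so no single table can end up concentrated on a few servers even when the global region count already looks balanced.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: balance each table's regions separately so that, per
// table, server loads differ by at most one region.
class PerTableBalanceSketch {
    /** tableRegions: table name -> region count; returns table -> per-server region counts. */
    static Map<String, int[]> balance(Map<String, Integer> tableRegions, int servers) {
        Map<String, int[]> assignment = new HashMap<>();
        for (Map.Entry<String, Integer> e : tableRegions.entrySet()) {
            int[] perServer = new int[servers];
            for (int r = 0; r < e.getValue(); r++) {
                perServer[r % servers]++;   // round-robin within this one table
            }
            assignment.put(e.getKey(), perServer);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // The report's example: 839 regions of one table. Over 10 servers the
        // worst case drops from 202 on one server to ceil(839/10) per server.
        int[] t = balance(Map.of("bigTable", 839), 10).get("bigTable");
        int max = Arrays.stream(t).max().getAsInt();
        int min = Arrays.stream(t).min().getAsInt();
        System.out.println("per-server spread for bigTable: min=" + min + " max=" + max);
    }
}
```

The per-table invariant (max minus min at most one, per table) is stronger than the global invariant the old balancer enforced, which is exactly why a globally balanced cluster could still host 202 of one table's 839 regions on a single server.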
[jira] [Commented] (HBASE-5243) LogSyncerThread not getting shutdown waiting for the interrupted flag
[ https://issues.apache.org/jira/browse/HBASE-5243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191638#comment-13191638 ] Hudson commented on HBASE-5243: --- Integrated in HBase-0.92 #258 (See [https://builds.apache.org/job/HBase-0.92/258/]) HBASE-5243 Addendum moves the close() method to right place tedyu : Files : * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java LogSyncerThread not getting shutdown waiting for the interrupted flag - Key: HBASE-5243 URL: https://issues.apache.org/jira/browse/HBASE-5243 Project: HBase Issue Type: Bug Affects Versions: 0.90.5 Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 0.90.6, 0.92.1 Attachments: 5243-92.addendum, HBASE-5243_0.90.patch, HBASE-5243_0.90_1.patch, HBASE-5243_trunk.patch In LogSyncer's run() we keep looping until the isInterrupted() flag is set, but in some cases the DFSClient consumes the InterruptedException, so we run into an infinite loop during some shutdowns. Since we are the ones who close down the LogSyncerThread, I would suggest introducing a close/shutdown flag and stopping the thread based on the state of that flag together with isInterrupted(). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
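The fix described above can be sketched as follows. Class and field names are illustrative, not the actual HLog internals: because the DFS client can swallow the interrupt (clearing the interrupted status), the loop also checks an explicit volatile shutdown flag, so close() can always stop the thread.

```java
// Hedged sketch of the proposed fix: a volatile flag backs up isInterrupted(),
// so a swallowed InterruptedException cannot leave the syncer looping forever.
class LogSyncerSketch extends Thread {
    private volatile boolean closeRequested = false;

    @Override
    public void run() {
        // Loop until interrupted OR explicitly closed; the original code relied
        // on isInterrupted() alone, which a consumed interrupt defeats.
        while (!isInterrupted() && !closeRequested) {
            try {
                Thread.sleep(10);   // stand-in for the periodic sync work
            } catch (InterruptedException e) {
                // Mimic a DFS client consuming the interrupt: the interrupted
                // status is now cleared, so only closeRequested can end the loop.
            }
        }
    }

    void close() {
        closeRequested = true;      // the new flag: works even if interrupts are swallowed
        interrupt();                // still interrupt, for a prompt wakeup from sleep
        try {
            join(5000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        LogSyncerSketch syncer = new LogSyncerSketch();
        syncer.start();
        syncer.close();
        System.out.println("syncer alive after close: " + syncer.isAlive());
    }
}
```

Keeping the interrupt() call alongside the flag is deliberate: the flag guarantees termination, while the interrupt makes it prompt when nothing downstream swallows it.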
[jira] [Commented] (HBASE-5255) Use singletons for OperationStatus to save memory
[ https://issues.apache.org/jira/browse/HBASE-5255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191833#comment-13191833 ] Hudson commented on HBASE-5255: --- Integrated in HBase-0.92 #259 (See [https://builds.apache.org/job/HBase-0.92/259/]) HBASE-5255 Use singletons for OperationStatus to save memory (Benoit) tedyu : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/OperationStatus.java Use singletons for OperationStatus to save memory - Key: HBASE-5255 URL: https://issues.apache.org/jira/browse/HBASE-5255 Project: HBase Issue Type: Improvement Components: regionserver Affects Versions: 0.90.5, 0.92.0 Reporter: Benoit Sigoure Assignee: Benoit Sigoure Priority: Minor Labels: performance Fix For: 0.94.0, 0.92.1 Attachments: 5255-92.txt, 5255-v2.txt, HBASE-5255-0.92-Use-singletons-to-remove-unnecessary-memory-allocati.patch, HBASE-5255-trunk-Use-singletons-to-remove-unnecessary-memory-allocati.patch Every single {{Put}} causes the allocation of at least one {{OperationStatus}}, yet {{OperationStatus}} is almost always stateless, so these allocations are unnecessary and could be avoided. Attached patch adds a few singletons and uses them, with no public API change. I didn't test the patches, but you get the idea. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
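The shape of the optimization is simple to sketch. This mirrors the idea, not necessarily the committed patch: pre-allocate one immutable instance per common status code and hand out that shared object, allocating a fresh one only when a status actually carries state (an error message).

```java
// Hedged sketch: stateless statuses become shared constants, so a batch of
// N puts that all succeed reuses one object instead of allocating N.
final class OperationStatusSketch {
    static final OperationStatusSketch SUCCESS = new OperationStatusSketch("SUCCESS", null);
    static final OperationStatusSketch FAILURE = new OperationStatusSketch("FAILURE", null);

    private final String code;
    private final String exceptionMsg;

    private OperationStatusSketch(String code, String exceptionMsg) {
        this.code = code;
        this.exceptionMsg = exceptionMsg;
    }

    /** Only statuses carrying a message need a fresh allocation. */
    static OperationStatusSketch withMessage(String code, String msg) {
        return new OperationStatusSketch(code, msg);
    }

    String getCode() { return code; }

    public static void main(String[] args) {
        // Identity, not just equality: every success shares the same object.
        System.out.println(OperationStatusSketch.SUCCESS == OperationStatusSketch.SUCCESS);
    }
}
```

Since the shared instances are immutable, handing the same reference to every caller is safe; only the with-message path pays an allocation, matching the report's observation that OperationStatus is almost always stateless.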
[jira] [Commented] (HBASE-5255) Use singletons for OperationStatus to save memory
[ https://issues.apache.org/jira/browse/HBASE-5255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191877#comment-13191877 ] Hudson commented on HBASE-5255: --- Integrated in HBase-TRUNK #2645 (See [https://builds.apache.org/job/HBase-TRUNK/2645/]) HBASE-5255 Use singletons for OperationStatus to save memory (Benoit) tedyu : Files : * /hbase/trunk/CHANGES.txt * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/OperationStatus.java
[jira] [Commented] (HBASE-5254) [book] book.xml - Arch/Regions added link to Troubleshooting for hbase objects
[ https://issues.apache.org/jira/browse/HBASE-5254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191879#comment-13191879 ] Hudson commented on HBASE-5254: --- Integrated in HBase-TRUNK #2645 (See [https://builds.apache.org/job/HBase-TRUNK/2645/]) hbase-5254 book.xml. Arch/Regions, added link to troubleshooting section on hbase objects on HDFS [book] book.xml - Arch/Regions added link to Troubleshooting for hbase objects -- Key: HBASE-5254 URL: https://issues.apache.org/jira/browse/HBASE-5254 Project: HBase Issue Type: Improvement Reporter: Doug Meil Assignee: Doug Meil Priority: Trivial Attachments: book_hbase_5254.xml.patch book.xml * in Arch/Regions under the object heirarchy chart I just added, I added a link to the troubleshooting section where it shows what the HBase objects look like when written to HDFS. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5264) Add 0.92.0 upgrade guide
[ https://issues.apache.org/jira/browse/HBASE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191881#comment-13191881 ] Hudson commented on HBASE-5264: --- Integrated in HBase-TRUNK #2645 (See [https://builds.apache.org/job/HBase-TRUNK/2645/]) HBASE-5264 Add 0.92.0 upgrade guide stack : Files : * /hbase/trunk/src/docbkx/performance.xml * /hbase/trunk/src/docbkx/upgrading.xml * /hbase/trunk/src/main/resources/hbase-webapps/static/favicon.ico * /hbase/trunk/src/site/resources/images/favicon.ico Add 0.92.0 upgrade guide Key: HBASE-5264 URL: https://issues.apache.org/jira/browse/HBASE-5264 Project: HBase Issue Type: Task Reporter: stack Fix For: 0.94.0 Attachments: 5264.txt Add an upgrade guide for going from 0.90 to 0.92. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5245) HBase shell should use alternate jruby if JRUBY_HOME is set, should pass along JRUBY_OPTS
[ https://issues.apache.org/jira/browse/HBASE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191878#comment-13191878 ] Hudson commented on HBASE-5245: --- Integrated in HBase-TRUNK #2645 (See [https://builds.apache.org/job/HBase-TRUNK/2645/]) HBASE-5245 HBase shell should use alternate jruby if JRUBY_HOME is set, should pass along JRUBY_OPTS stack : Files : * /hbase/trunk/bin/hbase HBase shell should use alternate jruby if JRUBY_HOME is set, should pass along JRUBY_OPTS - Key: HBASE-5245 URL: https://issues.apache.org/jira/browse/HBASE-5245 Project: HBase Issue Type: Improvement Components: shell Affects Versions: 0.90.4 Reporter: Philip (flip) Kromer Priority: Minor Attachments: 5245-v2.txt, hbase-jruby_home-and-jruby_opts.patch Original Estimate: 0h Remaining Estimate: 0h Invoking {{hbase shell}}, the hbase runner launches the jruby jar directly, and so behaves differently than the traditional jruby runner. Specifically, it * does not respect the {{JRUBY_OPTS}} environment variable (among other things, I cannot launch the shell to use ruby-1.9 mode) * does not respect the {{JRUBY_HOME}} environment variable (placing things in an inconsistent state if my classpath holds the system jruby). This patch allows you to use an alternative jruby and to specify options to the jruby jar. * When the command is 'shell', adds {{$JRUBY_OPTS}} to the CLASS * When the command is 'shell' and {{$JRUBY_HOME}} is set, adds {{$JRUBY_HOME/lib/jruby.jar}} to the classpath, and sets {{-Djruby.home}} and {{-Djruby.job}} config variables. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5243) LogSyncerThread not getting shutdown waiting for the interrupted flag
[ https://issues.apache.org/jira/browse/HBASE-5243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191880#comment-13191880 ] Hudson commented on HBASE-5243: --- Integrated in HBase-TRUNK #2645 (See [https://builds.apache.org/job/HBase-TRUNK/2645/]) HBASE-5243 LogSyncerThread not getting shutdown waiting for the interrupted flag (Ram) ramkrishna : Files : * /hbase/trunk/CHANGES.txt * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java
[jira] [Commented] (HBASE-4117) Slow Query Log
[ https://issues.apache.org/jira/browse/HBASE-4117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191882#comment-13191882 ] Hudson commented on HBASE-4117: --- Integrated in HBase-TRUNK #2645 (See [https://builds.apache.org/job/HBase-TRUNK/2645/]) Add Riley's slow query doc from hbase-4117 Slow Query Log -- Key: HBASE-4117 URL: https://issues.apache.org/jira/browse/HBASE-4117 Project: HBase Issue Type: New Feature Components: ipc Reporter: Riley Patterson Assignee: Riley Patterson Priority: Minor Labels: client, ipc Fix For: 0.92.0 Attachments: HBASE-4117-doc.txt, HBASE-4117-v2.patch, HBASE-4117-v3.patch, HBASE-4117.patch Produce log messages for slow queries. The RPC server will decide what is slow based on a configurable warn response time parameter. Queries designated as slow will then output a response too slow message followed by a fingerprint of the query, and a summary limited in size by another configurable parameter (to limit log spamming). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
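The mechanism described above is easy to sketch. Names and the log-line format here are illustrative assumptions, not the committed implementation: the RPC server times each call and, past a configurable threshold, emits a "response too slow" line containing a query fingerprint capped at a configurable length.

```java
// Hedged sketch of a threshold-based slow query log: both knobs (warn time
// and summary cap) are configurable, as the feature describes.
class SlowQueryLogSketch {
    private final long warnResponseTimeMs;   // the configurable warn response time
    private final int maxSummaryLength;      // cap on the summary, to limit log spam

    SlowQueryLogSketch(long warnResponseTimeMs, int maxSummaryLength) {
        this.warnResponseTimeMs = warnResponseTimeMs;
        this.maxSummaryLength = maxSummaryLength;
    }

    /** Returns the log line for a slow call, or null if the call was fast enough. */
    String maybeWarn(String queryFingerprint, long elapsedMs) {
        if (elapsedMs <= warnResponseTimeMs) return null;
        String summary = queryFingerprint.length() > maxSummaryLength
                ? queryFingerprint.substring(0, maxSummaryLength) + "..."
                : queryFingerprint;
        return "(response too slow) took " + elapsedMs + "ms: " + summary;
    }

    public static void main(String[] args) {
        SlowQueryLogSketch log = new SlowQueryLogSketch(1000, 20);
        System.out.println(log.maybeWarn("scan 't1', STARTROW => 'aaaa'", 2500));
    }
}
```

Capping the summary rather than the decision itself keeps the log cheap even when a pathological query carries a huge fingerprint.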
[jira] [Commented] (HBASE-5139) Compute (weighted) median using AggregateProtocol
[ https://issues.apache.org/jira/browse/HBASE-5139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191886#comment-13191886 ] Hudson commented on HBASE-5139: --- Integrated in HBase-TRUNK #2645 (See [https://builds.apache.org/job/HBase-TRUNK/2645/]) HBASE-5139 Addendum handles startRow being null for the case where median is in the first region tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/coprocessor/AggregationClient.java Compute (weighted) median using AggregateProtocol - Key: HBASE-5139 URL: https://issues.apache.org/jira/browse/HBASE-5139 Project: HBase Issue Type: Sub-task Reporter: Zhihong Yu Assignee: Zhihong Yu Attachments: 5139-v2.txt, 5139.addendum Suppose cf:cq1 stores numeric values and optionally cf:cq2 stores weights. This task finds out the median value among the values of cf:cq1 (See http://www.stat.ucl.ac.be/ISdidactique/Rhelp/library/R.basic/html/weighted.median.html) This can be done in two passes. The first pass utilizes AggregateProtocol where the following tuple is returned from each region: (partial-sum-of-values, partial-sum-of-weights) The start rowkey (supplied by coprocessor framework) would be used to sort the tuples. This way we can determine which region (called R) contains the (weighted) median. partial-sum-of-weights can be 0 if unweighted median is sought The second pass involves scanning the table, beginning with startrow of region R and computing partial (weighted) sum until the threshold of S/2 is crossed. The (weighted) median is returned. However, this approach wouldn't work if there is mutation in the underlying table between pass one and pass two. In that case, sequential scanning seems to be the solution which is slower than the above approach. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
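The two-pass scheme described above can be sketched with plain arrays standing in for regions (a hedged illustration of the algorithm, not the AggregateProtocol API). Pass one collects one (sum-of-values, sum-of-weights) tuple per region in rowkey order; pass two scans only the region R where the running weight crosses S/2.

```java
// Hedged sketch of the two-pass weighted median: regionValues[i]/regionWeights[i]
// play the role of region i's rows, already sorted by rowkey across regions.
class WeightedMedianSketch {
    static double weightedMedian(double[][] regionValues, double[][] regionWeights) {
        // Pass 1: per-region partial weight sums (AggregateProtocol's role).
        int n = regionValues.length;
        double[] regionWeightSum = new double[n];
        double total = 0;
        for (int i = 0; i < n; i++) {
            for (double w : regionWeights[i]) regionWeightSum[i] += w;
            total += regionWeightSum[i];
        }
        // Sorted tuples let us locate region R containing the S/2 crossing.
        double half = total / 2, before = 0;
        int r = 0;
        while (before + regionWeightSum[r] < half) before += regionWeightSum[r++];
        // Pass 2: scan region R from its start row until the running weight crosses S/2.
        double running = before;
        for (int j = 0; j < regionValues[r].length; j++) {
            running += regionWeights[r][j];
            if (running >= half) return regionValues[r][j];
        }
        throw new IllegalStateException("unreachable when all weights are positive");
    }

    public static void main(String[] args) {
        double[][] vals = {{1, 2, 3}, {4, 5}};
        double[][] ones = {{1, 1, 1}, {1, 1}};   // uniform weights -> plain median
        System.out.println(weightedMedian(vals, ones));
    }
}
```

As the report notes, this only holds if the table does not mutate between the two passes; otherwise the crossing region computed in pass one may no longer be correct, and a single sequential scan is the safe fallback.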
[jira] [Commented] (HBASE-5260) [book] troubleshooting.xml - Troubleshooting/Network/Loopback IP using incorrect XML element to config entry
[ https://issues.apache.org/jira/browse/HBASE-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191883#comment-13191883 ] Hudson commented on HBASE-5260: --- Integrated in HBase-TRUNK #2645 (See [https://builds.apache.org/job/HBase-TRUNK/2645/]) hbase-5260. troubleshooting.xml - fixed incorrect XML tag [book] troubleshooting.xml - Troubleshooting/Network/Loopback IP using incorrect XML element to config entry Key: HBASE-5260 URL: https://issues.apache.org/jira/browse/HBASE-5260 Project: HBase Issue Type: Improvement Reporter: Doug Meil Assignee: Doug Meil Priority: Trivial Attachments: troubleshooting_hbase_5260.xml.patch troubleshooting.xml * the Troubleshooting/Network/Loopback IP entry is using the incorrect XML element to link to the Config section. It's using link instead of an xref, so the description is ??? Oddly enough, though, the link actually works. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5252) [book] book.xml - added section in Data Model about joins (and the lack thereof)
[ https://issues.apache.org/jira/browse/HBASE-5252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191884#comment-13191884 ] Hudson commented on HBASE-5252: --- Integrated in HBase-TRUNK #2645 (See [https://builds.apache.org/job/HBase-TRUNK/2645/]) hbase-5252. book.xml, added section in Data Model about joins [book] book.xml - added section in Data Model about joins (and the lack thereof) Key: HBASE-5252 URL: https://issues.apache.org/jira/browse/HBASE-5252 Project: HBase Issue Type: Improvement Reporter: Doug Meil Assignee: Doug Meil Priority: Minor Attachments: book_hbase_5252.xml.patch book.xml * Added section in Data Model for Joins and that HBase doesn't support them out of the box and you have to do them yourself. * Also added link from Schema Design to this new section. This is a common question in the dist-list. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5253) [book] book.xml - Arch/Regions - adding chart showing object heirarchy
[ https://issues.apache.org/jira/browse/HBASE-5253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191885#comment-13191885 ] Hudson commented on HBASE-5253: --- Integrated in HBase-TRUNK #2645 (See [https://builds.apache.org/job/HBase-TRUNK/2645/]) hbase-5253 book.xml - adding chart of object heirarchy in Arch/Regions [book] book.xml - Arch/Regions - adding chart showing object heirarchy -- Key: HBASE-5253 URL: https://issues.apache.org/jira/browse/HBASE-5253 Project: HBase Issue Type: Improvement Reporter: Doug Meil Assignee: Doug Meil Priority: Minor Attachments: book_hbase_5253.xml.patch book.xml * adding word-chart showing object heirarchy of Regions in the beginning of that section in the Arch chapter. The description up until this point was entirely prose and it needs even a simple picture. * also minor thing, adding that compression happens at block level within StoreFiles (block sub-section). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5255) Use singletons for OperationStatus to save memory
[ https://issues.apache.org/jira/browse/HBASE-5255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191937#comment-13191937 ] Hudson commented on HBASE-5255: --- Integrated in HBase-TRUNK-security #86 (See [https://builds.apache.org/job/HBase-TRUNK-security/86/]) HBASE-5255 Use singletons for OperationStatus to save memory (Benoit) tedyu : Files : * /hbase/trunk/CHANGES.txt * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/OperationStatus.java
[jira] [Commented] (HBASE-5264) Add 0.92.0 upgrade guide
[ https://issues.apache.org/jira/browse/HBASE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191938#comment-13191938 ] Hudson commented on HBASE-5264: --- Integrated in HBase-TRUNK-security #86 (See [https://builds.apache.org/job/HBase-TRUNK-security/86/]) HBASE-5264 Add 0.92.0 upgrade guide stack : Files : * /hbase/trunk/src/docbkx/performance.xml * /hbase/trunk/src/docbkx/upgrading.xml * /hbase/trunk/src/main/resources/hbase-webapps/static/favicon.ico * /hbase/trunk/src/site/resources/images/favicon.ico
[jira] [Commented] (HBASE-5260) [book] troubleshooting.xml - Troubleshooting/Network/Loopback IP using incorrect XML element to config entry
[ https://issues.apache.org/jira/browse/HBASE-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191940#comment-13191940 ] Hudson commented on HBASE-5260: --- Integrated in HBase-TRUNK-security #86 (See [https://builds.apache.org/job/HBase-TRUNK-security/86/]) hbase-5260. troubleshooting.xml - fixed incorrect XML tag
[jira] [Commented] (HBASE-4117) Slow Query Log
[ https://issues.apache.org/jira/browse/HBASE-4117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191939#comment-13191939 ] Hudson commented on HBASE-4117: --- Integrated in HBase-TRUNK-security #86 (See [https://builds.apache.org/job/HBase-TRUNK-security/86/]) Add Riley's slow query doc from hbase-4117
[jira] [Commented] (HBASE-5139) Compute (weighted) median using AggregateProtocol
[ https://issues.apache.org/jira/browse/HBASE-5139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191941#comment-13191941 ] Hudson commented on HBASE-5139: --- Integrated in HBase-TRUNK-security #86 (See [https://builds.apache.org/job/HBase-TRUNK-security/86/]) HBASE-5139 Addendum handles startRow being null for the case where median is in the first region tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/coprocessor/AggregationClient.java
[jira] [Commented] (HBASE-5255) Use singletons for OperationStatus to save memory
[ https://issues.apache.org/jira/browse/HBASE-5255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191985#comment-13191985 ] Hudson commented on HBASE-5255: --- Integrated in HBase-0.92-security #88 (See [https://builds.apache.org/job/HBase-0.92-security/88/]) HBASE-5255 Use singletons for OperationStatus to save memory (Benoit) tedyu : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/OperationStatus.java
[jira] [Commented] (HBASE-5237) Addendum for HBASE-5160 and HBASE-4397
[ https://issues.apache.org/jira/browse/HBASE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191988#comment-13191988 ] Hudson commented on HBASE-5237: --- Integrated in HBase-0.92-security #88 (See [https://builds.apache.org/job/HBase-0.92-security/88/]) HBASE-5237 Addendum for HBASE-5160 and HBASE-4397 (Ram) ramkrishna : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
[jira] [Commented] (HBASE-5160) Backport HBASE-4397 - -ROOT-, .META. tables stay offline for too long in recovery phase after all RSs are shutdown at the same time
[ https://issues.apache.org/jira/browse/HBASE-5160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191989#comment-13191989 ] Hudson commented on HBASE-5160: --- Integrated in HBase-0.92-security #88 (See [https://builds.apache.org/job/HBase-0.92-security/88/]) HBASE-5237 Addendum for HBASE-5160 and HBASE-4397(Ram) ramkrishna : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java Backport HBASE-4397 - -ROOT-, .META. tables stay offline for too long in recovery phase after all RSs are shutdown at the same time --- Key: HBASE-5160 URL: https://issues.apache.org/jira/browse/HBASE-5160 Project: HBase Issue Type: Bug Reporter: ramkrishna.s.vasudevan Fix For: 0.90.6 Attachments: HBASE-5160-AssignmentManager.patch, HBASE-5160_2.patch Backporting to 0.90.6 considering the importance of the issue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5243) LogSyncerThread not getting shutdown waiting for the interrupted flag
[ https://issues.apache.org/jira/browse/HBASE-5243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191987#comment-13191987 ] Hudson commented on HBASE-5243: --- Integrated in HBase-0.92-security #88 (See [https://builds.apache.org/job/HBase-0.92-security/88/]) HBASE-5243 Addendum moves the close() method to the right place HBASE-5243 LogSyncerThread not getting shutdown waiting for the interrupted flag (Ram). tedyu : Files : * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java ramkrishna : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java LogSyncerThread not getting shutdown waiting for the interrupted flag - Key: HBASE-5243 URL: https://issues.apache.org/jira/browse/HBASE-5243 Project: HBase Issue Type: Bug Affects Versions: 0.90.5 Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 0.90.6, 0.92.1 Attachments: 5243-92.addendum, HBASE-5243_0.90.patch, HBASE-5243_0.90_1.patch, HBASE-5243_trunk.patch In LogSyncer's run() we keep looping until the isInterrupted flag is set, but in some cases the DFSClient consumes the InterruptedException, so we run into an infinite loop in some shutdown cases. Since we are the ones who close down the LogSyncer thread, I would suggest introducing a flag such as close or shutdown; based on the state of this flag, together with isInterrupted(), we can make the thread stop.
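The suggested fix, a dedicated shutdown flag checked alongside the interrupt status, can be sketched as a toy example (not the real HLog.LogSyncer code): even when a lower layer swallows the InterruptedException, the close flag still stops the loop.

```java
// Toy sketch of the suggested fix; not the real HLog.LogSyncer code.
class ToySyncer implements Runnable {
    private volatile boolean closed = false;   // explicit shutdown flag

    void requestClose() { closed = true; }

    @Override public void run() {
        // The loop exits on either the interrupt status or the close flag,
        // so a library that swallows InterruptedException cannot keep
        // this thread alive forever.
        while (!closed && !Thread.currentThread().isInterrupted()) {
            try {
                Thread.sleep(10);  // stands in for the periodic sync work
            } catch (InterruptedException e) {
                // Simulate a lower layer consuming the interrupt: the
                // interrupt status is deliberately NOT restored here.
            }
        }
    }
}
```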
[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
[ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191986#comment-13191986 ] Hudson commented on HBASE-5235: --- Integrated in HBase-0.92-security #88 (See [https://builds.apache.org/job/HBase-0.92-security/88/]) HBASE-5235 HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions. (Ram) ramkrishna : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions. -- Key: HBASE-5235 URL: https://issues.apache.org/jira/browse/HBASE-5235 Project: HBase Issue Type: Bug Affects Versions: 0.90.5, 0.92.0 Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 0.90.6, 0.92.1 Attachments: HBASE-5235_0.90.patch, HBASE-5235_0.90_1.patch, HBASE-5235_0.90_2.patch, HBASE-5235_trunk.patch Please find the analysis below; correct me if I am wrong. {code} 2012-01-15 05:14:02,374 FATAL org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got while writing log entry to log java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting... at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026) {code} Here we have an exception in one of the writer threads. If there is any exception, we hold it in an atomic variable: {code} private void writerThreadError(Throwable t) { thrown.compareAndSet(null, t); } {code} In the finally block of splitLog we try to close the streams.
{code} for (WriterThread t: writerThreads) { try { t.join(); } catch (InterruptedException ie) { throw new IOException(ie); } checkForErrors(); } LOG.info("Split writers finished"); return closeStreams(); {code} Inside checkForErrors: {code} private void checkForErrors() throws IOException { Throwable thrown = this.thrown.get(); if (thrown == null) return; if (thrown instanceof IOException) { throw (IOException)thrown; } else { throw new RuntimeException(thrown); } } {code} So once we throw the exception, the DFSStreamer threads are not getting closed.
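The shape of a fix can be sketched as a toy example (illustrative names, not the real HLogSplitter API): close every writer stream in a finally block, so the streams are released even when checkForErrors() propagates a recorded writer-thread error.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Toy sketch; illustrative names, not the real HLogSplitter API.
class ToySplitter {
    final AtomicReference<Throwable> thrown = new AtomicReference<>();

    // Writer threads record the first error they hit, as in the snippet above.
    void writerThreadError(Throwable t) { thrown.compareAndSet(null, t); }

    void finishWriting(List<Closeable> streams) throws IOException {
        try {
            checkForErrors();                    // may throw the recorded error
        } finally {
            for (Closeable c : streams) {        // runs even on the error path
                try { c.close(); } catch (IOException ignored) { }
            }
        }
    }

    private void checkForErrors() throws IOException {
        Throwable t = thrown.get();
        if (t == null) return;
        if (t instanceof IOException) throw (IOException) t;
        throw new RuntimeException(t);
    }
}
```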
[jira] [Commented] (HBASE-3373) Allow regions to be load-balanced by table
[ https://issues.apache.org/jira/browse/HBASE-3373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191991#comment-13191991 ] Hudson commented on HBASE-3373: --- Integrated in HBase-0.92-security #88 (See [https://builds.apache.org/job/HBase-0.92-security/88/]) HBASE-5231 Backport HBASE-3373 (per-table load balancing) to 0.92 tedyu : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/DefaultLoadBalancer.java * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java Allow regions to be load-balanced by table -- Key: HBASE-3373 URL: https://issues.apache.org/jira/browse/HBASE-3373 Project: HBase Issue Type: Improvement Components: master Affects Versions: 0.20.6 Reporter: Ted Yu Assignee: Zhihong Yu Fix For: 0.94.0 Attachments: 3373.txt, HbaseBalancerTest2.java From our experience, cluster can be well balanced and yet, one table's regions may be badly concentrated on few region servers. For example, one table has 839 regions (380 regions at time of table creation) out of which 202 are on one server. It would be desirable for load balancer to distribute regions for specified tables evenly across the cluster. Each of such tables has number of regions many times the cluster size. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
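The core idea, balancing each table's regions independently rather than only the global region count, can be sketched as a toy example (hypothetical names, not the real DefaultLoadBalancer): spreading each table round-robin guarantees no table concentrates on a few servers even when the cluster looks balanced overall.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of per-table balancing; hypothetical names.
class ToyPerTableBalancer {
    // regionsByTable: table name -> number of regions.
    // Returns, per table, an array of region counts indexed by server.
    static Map<String, int[]> balance(Map<String, Integer> regionsByTable, int servers) {
        Map<String, int[]> out = new HashMap<>();
        for (Map.Entry<String, Integer> e : regionsByTable.entrySet()) {
            int[] perServer = new int[servers];
            for (int i = 0; i < e.getValue(); i++) {
                perServer[i % servers]++;   // round-robin within the table
            }
            out.put(e.getKey(), perServer);
        }
        return out;
    }
}
```

With the 839-region table from the description and ten servers, every server holds 83 or 84 of that table's regions instead of one server holding 202.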
[jira] [Commented] (HBASE-5231) Backport HBASE-3373 (per-table load balancing) to 0.92
[ https://issues.apache.org/jira/browse/HBASE-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191992#comment-13191992 ] Hudson commented on HBASE-5231: --- Integrated in HBase-0.92-security #88 (See [https://builds.apache.org/job/HBase-0.92-security/88/]) HBASE-5231 Backport HBASE-3373 (per-table load balancing) to 0.92 tedyu : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/DefaultLoadBalancer.java * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java Backport HBASE-3373 (per-table load balancing) to 0.92 -- Key: HBASE-5231 URL: https://issues.apache.org/jira/browse/HBASE-5231 Project: HBase Issue Type: Improvement Reporter: Zhihong Yu Fix For: 0.92.1 Attachments: 5231-v2.txt, 5231.txt This JIRA backports per-table load balancing to 0.90 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4397) -ROOT-, .META. tables stay offline for too long in recovery phase after all RSs are shutdown at the same time
[ https://issues.apache.org/jira/browse/HBASE-4397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13191990#comment-13191990 ] Hudson commented on HBASE-4397: --- Integrated in HBase-0.92-security #88 (See [https://builds.apache.org/job/HBase-0.92-security/88/]) HBASE-5237 Addendum for HBASE-5160 and HBASE-4397(Ram) ramkrishna : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java -ROOT-, .META. tables stay offline for too long in recovery phase after all RSs are shutdown at the same time - Key: HBASE-4397 URL: https://issues.apache.org/jira/browse/HBASE-4397 Project: HBase Issue Type: Bug Reporter: Ming Ma Assignee: Ming Ma Fix For: 0.94.0, 0.92.0 Attachments: HBASE-4397-0.92.patch 1. Shutdown all RSs. 2. Bring all RS back online. The -ROOT-, .META. stay in offline state until timeout monitor force assignment 30 minutes later. That is because HMaster can't find a RS to assign the tables to in assign operation. 
2011-09-13 13:25:52,743 WARN org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of -ROOT-,,0.70236052 to sea-lab-4,60020,1315870341387, trying to assign elsewhere instead; retry=0 java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373) at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:345) at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1002) at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:854) at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:148) at $Proxy9.openRegion(Unknown Source) at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:407) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1408) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1153) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1128) at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1123) at org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:1788) at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRoot(ServerShutdownHandler.java:100) at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.verifyAndAssignRootWithRetries(ServerShutdownHandler.java:118) at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:181) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:167) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) 2011-09-13 13:25:52,743 WARN org.apache.hadoop.hbase.master.AssignmentManager: Unable to find a viable location to assign region -ROOT-,,0.70236052 Possible fixes: 1. Have ServerManager handle the server-online event, similar to how RegionServerTracker.java calls serverManager.expireServer when a server goes down. 2. Make the timeout monitor handle this situation better. This is a special situation in the cluster; the 30-minute timeout can be skipped.
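Fix idea 2, skipping the long timeout in favor of prompt retries, can be sketched as a toy bounded-retry loop (hypothetical names; the real code path in the stack trace above is verifyAndAssignRootWithRetries):

```java
import java.util.function.Supplier;

// Toy sketch of retrying assignment with a short backoff instead of
// waiting out the 30-minute timeout monitor. Hypothetical names.
class ToyAssigner {
    // attemptAssign returns true once a live region server accepts the region.
    static int assignWithRetries(Supplier<Boolean> attemptAssign, int maxRetries,
                                 long backoffMs) throws InterruptedException {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            if (attemptAssign.get()) return attempt;  // assigned
            Thread.sleep(backoffMs);                  // short wait, then retry
        }
        throw new IllegalStateException("no region server accepted the region");
    }
}
```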
[jira] [Commented] (HBASE-5231) Backport HBASE-3373 (per-table load balancing) to 0.92
[ https://issues.apache.org/jira/browse/HBASE-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13192794#comment-13192794 ] Hudson commented on HBASE-5231: --- Integrated in HBase-0.92 #261 (See [https://builds.apache.org/job/HBase-0.92/261/]) HBASE-5231 revert - need to add unit test for per table load balancing tedyu : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/DefaultLoadBalancer.java * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java Backport HBASE-3373 (per-table load balancing) to 0.92 -- Key: HBASE-5231 URL: https://issues.apache.org/jira/browse/HBASE-5231 Project: HBase Issue Type: Improvement Reporter: Zhihong Yu Fix For: 0.92.1 Attachments: 5231-v2.txt, 5231.addendum, 5231.txt This JIRA backports per-table load balancing to 0.90 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4720) Implement atomic update operations (checkAndPut, checkAndDelete) for REST client/server
[ https://issues.apache.org/jira/browse/HBASE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13192876#comment-13192876 ] Hudson commented on HBASE-4720: --- Integrated in HBase-TRUNK-security #89 (See [https://builds.apache.org/job/HBase-TRUNK-security/89/]) HBASE-4720 revert until agreement is reached on solution HBASE-4720 Implement atomic update operations (checkAndPut, checkAndDelete) for REST client/server (Mubarak) tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndDeleteRowResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndDeleteTableResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndPutRowResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndPutTableResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/RootResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/rest/TestRowResource.java tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndDeleteRowResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndDeleteTableResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndPutRowResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndPutTableResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/RootResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/rest/TestRowResource.java Implement atomic update operations (checkAndPut, checkAndDelete) for REST client/server Key: HBASE-4720 URL: https://issues.apache.org/jira/browse/HBASE-4720 Project: HBase Issue Type: Improvement Reporter: Daniel Lord Assignee: Mubarak Seyed Fix For: 0.94.0 Attachments: HBASE-4720.trunk.v1.patch, 
HBASE-4720.trunk.v2.patch, HBASE-4720.trunk.v3.patch, HBASE-4720.trunk.v4.patch, HBASE-4720.trunk.v5.patch, HBASE-4720.trunk.v6.patch, HBASE-4720.v1.patch, HBASE-4720.v3.patch I have several large application/HBase clusters where an application node will occasionally need to talk to HBase from a different cluster. In order to help ensure some of my consistency guarantees I have a sentinel table that is updated atomically as users interact with the system. This works quite well for the regular hbase client but the REST client does not implement the checkAndPut and checkAndDelete operations. This exposes the application to some race conditions that have to be worked around. It would be ideal if the same checkAndPut/checkAndDelete operations could be supported by the REST client. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
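The semantics the REST client was missing can be sketched with a toy in-memory table (not the real RemoteHTable API): the value comparison and the put happen under one lock, so two clients cannot both observe the old sentinel value and both write.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the checkAndPut contract; not the real RemoteHTable API.
class ToySentinelTable {
    private final Map<String, byte[]> cells = new HashMap<>();

    // Atomically: put `update` only if the current value equals `expected`
    // (null expected means "row must be absent").
    synchronized boolean checkAndPut(String row, byte[] expected, byte[] update) {
        byte[] current = cells.get(row);
        if (!Arrays.equals(current, expected)) return false;  // check failed
        cells.put(row, update);                               // atomic with the check
        return true;
    }

    synchronized byte[] get(String row) { return cells.get(row); }
}
```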
[jira] [Commented] (HBASE-5278) HBase shell script refers to removed migrate functionality
[ https://issues.apache.org/jira/browse/HBASE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13193396#comment-13193396 ] Hudson commented on HBASE-5278: --- Integrated in HBase-0.92 #262 (See [https://builds.apache.org/job/HBase-0.92/262/]) HBASE-5278 HBase shell script refers to removed 'migrate' functionality stack : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/bin/hbase HBase shell script refers to removed migrate functionality Key: HBASE-5278 URL: https://issues.apache.org/jira/browse/HBASE-5278 Project: HBase Issue Type: Bug Components: scripts Affects Versions: 0.94.0, 0.90.5, 0.92.0 Reporter: Shaneal Manek Assignee: Shaneal Manek Priority: Trivial Fix For: 0.94.0, 0.92.1 Attachments: hbase-5278.patch $ hbase migrate Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/util/Migrate Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.util.Migrate at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) Could not find the main class: org.apache.hadoop.hbase.util.Migrate. Program will exit. The 'hbase' shell script has docs referring to a 'migrate' command which no longer exists.
[jira] [Commented] (HBASE-5230) Ensure compactions do not cache-on-write data blocks
[ https://issues.apache.org/jira/browse/HBASE-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13193545#comment-13193545 ] Hudson commented on HBASE-5230: --- Integrated in HBase-TRUNK #2646 (See [https://builds.apache.org/job/HBase-TRUNK/2646/]) HBASE-5230 : ensure that compactions do not cache-on-write data blocks mbautin : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaMetrics.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java Ensure compactions do not cache-on-write data blocks Key: HBASE-5230 URL: https://issues.apache.org/jira/browse/HBASE-5230 Project: HBase Issue Type: Improvement Reporter: Mikhail Bautin Assignee: Mikhail Bautin Priority: Minor Attachments: D1353.1.patch, D1353.2.patch, D1353.3.patch, D1353.4.patch, Don-t-cache-data-blocks-on-compaction-2012-01-21_00_53_54.patch, Don-t-cache-data-blocks-on-compaction-2012-01-23_10_23_45.patch, Don-t-cache-data-blocks-on-compaction-2012-01-23_15_27_23.patch Create a unit test for HBASE-3976 (making sure we don't cache data blocks on write during compactions even if cache-on-write is generally enabled). This is because we have very different implementations of HBASE-3976 without HBASE-4422 CacheConfig (on top of 89-fb, created by Liyin) and with CacheConfig (presumably it's there but not sure if it even works, since the patch in HBASE-3976 may not have been committed). 
We need to create a unit test to verify that we don't cache data blocks on write during compactions, and resolve HBASE-3976 so that this new unit test does not fail.
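The behavior the requested unit test should pin down can be sketched as a toy write path (hypothetical names, not the real CacheConfig API): with cache-on-write enabled, blocks written by a compaction still bypass the block cache.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of skipping cache-on-write during compactions; hypothetical names.
class ToyWriter {
    final Map<String, byte[]> blockCache = new HashMap<>();
    final boolean cacheOnWrite;

    ToyWriter(boolean cacheOnWrite) { this.cacheOnWrite = cacheOnWrite; }

    void writeBlock(String key, byte[] data, boolean isCompaction) {
        // ... persist data to the HFile here ...
        if (cacheOnWrite && !isCompaction) {
            blockCache.put(key, data);   // cache only non-compaction writes
        }
    }
}
```

The rationale: compaction rewrites every block of a store, so caching those writes would evict the hot working set in favor of data that may never be read.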
[jira] [Commented] (HBASE-4720) Implement atomic update operations (checkAndPut, checkAndDelete) for REST client/server
[ https://issues.apache.org/jira/browse/HBASE-4720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13193546#comment-13193546 ] Hudson commented on HBASE-4720: --- Integrated in HBase-TRUNK #2646 (See [https://builds.apache.org/job/HBase-TRUNK/2646/]) HBASE-4720 revert until agreement is reached on solution HBASE-4720 Implement atomic update operations (checkAndPut, checkAndDelete) for REST client/server (Mubarak) tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndDeleteRowResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndDeleteTableResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndPutRowResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndPutTableResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/RootResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/rest/TestRowResource.java tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndDeleteRowResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndDeleteTableResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndPutRowResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/CheckAndPutTableResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/RootResource.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/rest/TestRowResource.java Implement atomic update operations (checkAndPut, checkAndDelete) for REST client/server Key: HBASE-4720 URL: https://issues.apache.org/jira/browse/HBASE-4720 Project: HBase Issue Type: Improvement Reporter: Daniel Lord Assignee: Mubarak Seyed Fix For: 0.94.0 Attachments: HBASE-4720.trunk.v1.patch, HBASE-4720.trunk.v2.patch, 
HBASE-4720.trunk.v3.patch, HBASE-4720.trunk.v4.patch, HBASE-4720.trunk.v5.patch, HBASE-4720.trunk.v6.patch, HBASE-4720.v1.patch, HBASE-4720.v3.patch I have several large application/HBase clusters where an application node will occasionally need to talk to HBase from a different cluster. In order to help ensure some of my consistency guarantees I have a sentinel table that is updated atomically as users interact with the system. This works quite well for the regular hbase client but the REST client does not implement the checkAndPut and checkAndDelete operations. This exposes the application to some race conditions that have to be worked around. It would be ideal if the same checkAndPut/checkAndDelete operations could be supported by the REST client. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4218) Data Block Encoding of KeyValues (aka delta encoding / prefix compression)
[ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13193543#comment-13193543 ] Hudson commented on HBASE-4218: --- Integrated in HBase-TRUNK #2646 (See [https://builds.apache.org/job/HBase-TRUNK/2646/]) [jira] [HBASE-4218] HFile data block encoding framework and delta encoding implementation (Jacek Midgal, Mikhail Bautin) Summary: Adding a framework that allows to encode keys in an HFile data block. We support two modes of encoding: (1) both on disk and in cache, and (2) in cache only. This is distinct from compression that is already being done in HBase, e.g. GZ or LZO. When data block encoding is enabled, we store blocks in cache in an uncompressed but encoded form. This allows to fit more blocks in cache and reduce the number of disk reads. The most common example of data block encoding is delta encoding, where we take advantage of the fact that HFile keys are sorted and share a lot of common prefixes, and only store the delta between each pair of consecutive keys. Initial encoding algorithms implemented are DIFF, FAST_DIFF, and PREFIX. This is based on the delta encoding patch developed by Jacek Midgal during his 2011 summer internship at Facebook. The original patch is available here: https://reviews.apache.org/r/2308/diff/. Test Plan: Unit tests. Distributed load test on a five-node cluster. 
Reviewers: JIRA, tedyu, stack, nspiegelberg, Kannan Reviewed By: Kannan CC: tedyu, todd, mbautin, stack, Kannan, mcorgan, gqchen Differential Revision: https://reviews.facebook.net/D447 mbautin : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/KeyValue.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/CompressionState.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/EncoderBufferTooSmallException.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileWriter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheKey.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockType.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java * 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV1.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaConfigured.java *
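The PREFIX encoding idea described in the commit summary can be illustrated with a toy encoder (a simplification, not the real DataBlockEncoder interface): each sorted key is stored as the length of the prefix it shares with the previous key plus the remaining suffix.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of prefix/delta encoding for sorted keys;
// a simplification, not the real DataBlockEncoder implementations.
class ToyPrefixEncoder {
    record Entry(int prefixLen, String suffix) { }

    static List<Entry> encode(List<String> sortedKeys) {
        List<Entry> out = new ArrayList<>();
        String prev = "";
        for (String key : sortedKeys) {
            int p = 0;
            int limit = Math.min(prev.length(), key.length());
            while (p < limit && prev.charAt(p) == key.charAt(p)) p++;
            out.add(new Entry(p, key.substring(p)));  // shared length + suffix
            prev = key;
        }
        return out;
    }

    static List<String> decode(List<Entry> entries) {
        List<String> out = new ArrayList<>();
        String prev = "";
        for (Entry e : entries) {
            String key = prev.substring(0, e.prefixLen()) + e.suffix();
            out.add(key);
            prev = key;
        }
        return out;
    }
}
```

Because HFile keys are sorted and share long prefixes, the stored suffixes are short, which is what lets more blocks fit in cache in their encoded form.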
[jira] [Commented] (HBASE-5278) HBase shell script refers to removed migrate functionality
[ https://issues.apache.org/jira/browse/HBASE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13193544#comment-13193544 ] Hudson commented on HBASE-5278: --- Integrated in HBase-TRUNK #2646 (See [https://builds.apache.org/job/HBase-TRUNK/2646/]) HBASE-5278 HBase shell script refers to removed 'migrate' functionality stack : Files : * /hbase/trunk/bin/hbase HBase shell script refers to removed migrate functionality Key: HBASE-5278 URL: https://issues.apache.org/jira/browse/HBASE-5278 Project: HBase Issue Type: Bug Components: scripts Affects Versions: 0.94.0, 0.90.5, 0.92.0 Reporter: Shaneal Manek Assignee: Shaneal Manek Priority: Trivial Fix For: 0.94.0, 0.92.1 Attachments: hbase-5278.patch $ hbase migrate Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/util/Migrate Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.util.Migrate at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) Could not find the main class: org.apache.hadoop.hbase.util.Migrate. Program will exit. The 'hbase' shell script has docs referring to a 'migrate' command which no longer exists.
[jira] [Commented] (HBASE-5230) Ensure compactions do not cache-on-write data blocks
[ https://issues.apache.org/jira/browse/HBASE-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13193566#comment-13193566 ] Hudson commented on HBASE-5230: --- Integrated in HBase-TRUNK-security #90 (See [https://builds.apache.org/job/HBase-TRUNK-security/90/]) HBASE-5230 : ensure that compactions do not cache-on-write data blocks mbautin : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaMetrics.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java Ensure compactions do not cache-on-write data blocks Key: HBASE-5230 URL: https://issues.apache.org/jira/browse/HBASE-5230 Project: HBase Issue Type: Improvement Reporter: Mikhail Bautin Assignee: Mikhail Bautin Priority: Minor Attachments: D1353.1.patch, D1353.2.patch, D1353.3.patch, D1353.4.patch, Don-t-cache-data-blocks-on-compaction-2012-01-21_00_53_54.patch, Don-t-cache-data-blocks-on-compaction-2012-01-23_10_23_45.patch, Don-t-cache-data-blocks-on-compaction-2012-01-23_15_27_23.patch Create a unit test for HBASE-3976 (making sure we don't cache data blocks on write during compactions even if cache-on-write is enabled generally enabled). This is because we have very different implementations of HBASE-3976 without HBASE-4422 CacheConfig (on top of 89-fb, created by Liyin) and with CacheConfig (presumably it's there but not sure if it even works, since the patch in HBASE-3976 may not have been committed). 
We need to create a unit test to verify that we don't cache data blocks on write during compactions, and resolve HBASE-3976 so that this new unit test does not fail.
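The behavior the unit test above is meant to pin down reduces to a single predicate. The following is a minimal sketch under assumed names (the class and method are illustrative, not HBase's actual CacheConfig API): even when cache-on-write is enabled for the store, blocks written by a compaction should not be cached.

```java
// Illustrative sketch of the cache-on-write decision HBASE-5230 tests.
// Names are hypothetical; HBase's real logic lives in CacheConfig/Store.
public class CacheOnWriteSketch {
    static boolean shouldCacheDataOnWrite(boolean cacheOnWriteEnabled, boolean isCompaction) {
        // Compactions rewrite large volumes of mostly cold data; caching those
        // blocks would evict hotter blocks from the LRU block cache.
        return cacheOnWriteEnabled && !isCompaction;
    }

    public static void main(String[] args) {
        System.out.println(shouldCacheDataOnWrite(true, false)); // flush/normal write: cache
        System.out.println(shouldCacheDataOnWrite(true, true));  // compaction: do not cache
    }
}
```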
[jira] [Commented] (HBASE-4218) Data Block Encoding of KeyValues (aka delta encoding / prefix compression)
[ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13193564#comment-13193564 ] Hudson commented on HBASE-4218: --- Integrated in HBase-TRUNK-security #90 (See [https://builds.apache.org/job/HBase-TRUNK-security/90/]) [jira] [HBASE-4218] HFile data block encoding framework and delta encoding implementation (Jacek Midgal, Mikhail Bautin) Summary: Adding a framework that allows to encode keys in an HFile data block. We support two modes of encoding: (1) both on disk and in cache, and (2) in cache only. This is distinct from compression that is already being done in HBase, e.g. GZ or LZO. When data block encoding is enabled, we store blocks in cache in an uncompressed but encoded form. This allows to fit more blocks in cache and reduce the number of disk reads. The most common example of data block encoding is delta encoding, where we take advantage of the fact that HFile keys are sorted and share a lot of common prefixes, and only store the delta between each pair of consecutive keys. Initial encoding algorithms implemented are DIFF, FAST_DIFF, and PREFIX. This is based on the delta encoding patch developed by Jacek Midgal during his 2011 summer internship at Facebook. The original patch is available here: https://reviews.apache.org/r/2308/diff/. Test Plan: Unit tests. Distributed load test on a five-node cluster. 
Reviewers: JIRA, tedyu, stack, nspiegelberg, Kannan Reviewed By: Kannan CC: tedyu, todd, mbautin, stack, Kannan, mcorgan, gqchen Differential Revision: https://reviews.facebook.net/D447 mbautin : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/KeyValue.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/CompressionState.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoding.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/EncoderBufferTooSmallException.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileWriter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheKey.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockType.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java * 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV1.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaConfigured.java *
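The delta-encoding idea in the commit message above (sorted HFile keys share long prefixes, so store only the difference from the previous key) can be sketched independently of HBase. This toy PREFIX-style encoder works on strings rather than raw key bytes and is not the actual DataBlockEncoder implementation; it encodes each key as the length of the common prefix with its predecessor plus the remaining suffix.

```java
import java.util.ArrayList;
import java.util.List;

// Toy PREFIX-style delta encoding over sorted string keys.
public class PrefixEncodingSketch {
    static List<String> encode(List<String> sortedKeys) {
        List<String> out = new ArrayList<>();
        String prev = "";
        for (String key : sortedKeys) {
            int common = 0;
            int max = Math.min(prev.length(), key.length());
            while (common < max && prev.charAt(common) == key.charAt(common)) {
                common++;
            }
            // Store (common-prefix length, suffix) instead of the whole key.
            out.add(common + ":" + key.substring(common));
            prev = key;
        }
        return out;
    }

    static List<String> decode(List<String> encoded) {
        List<String> out = new ArrayList<>();
        String prev = "";
        for (String entry : encoded) {
            int sep = entry.indexOf(':');
            int common = Integer.parseInt(entry.substring(0, sep));
            String key = prev.substring(0, common) + entry.substring(sep + 1);
            out.add(key);
            prev = key;
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> keys = List.of("row0001/cf:a", "row0001/cf:b", "row0002/cf:a");
        List<String> enc = encode(keys);
        System.out.println(enc); // [0:row0001/cf:a, 11:b, 6:2/cf:a]
        System.out.println(decode(enc).equals(keys)); // round-trips: true
    }
}
```

Because the encoded form is what sits in the block cache, more logical keys fit in the same cache footprint, which is the point made above about reducing disk reads.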
[jira] [Commented] (HBASE-5278) HBase shell script refers to removed migrate functionality
[ https://issues.apache.org/jira/browse/HBASE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13193773#comment-13193773 ] Hudson commented on HBASE-5278: --- Integrated in HBase-0.92-security #89 (See [https://builds.apache.org/job/HBase-0.92-security/89/]) HBASE-5278 HBase shell script refers to removed 'migrate' functionality stack : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/bin/hbase HBase shell script refers to removed migrate functionality Key: HBASE-5278 URL: https://issues.apache.org/jira/browse/HBASE-5278 Project: HBase Issue Type: Bug Components: scripts Affects Versions: 0.94.0, 0.90.5, 0.92.0 Reporter: Shaneal Manek Assignee: Shaneal Manek Priority: Trivial Fix For: 0.94.0, 0.92.1 Attachments: hbase-5278.patch $ hbase migrate Exception in thread main java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/util/Migrate Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.util.Migrate at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) Could not find the main class: org.apache.hadoop.hbase.util.Migrate. Program will exit. The 'hbase' shell script has docs referring to a 'migrate' command which no longer exists. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5231) Backport HBASE-3373 (per-table load balancing) to 0.92
[ https://issues.apache.org/jira/browse/HBASE-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13193774#comment-13193774 ] Hudson commented on HBASE-5231: --- Integrated in HBase-0.92-security #89 (See [https://builds.apache.org/job/HBase-0.92-security/89/]) HBASE-5231 revert - need to add unit test for per table load balancing tedyu : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/DefaultLoadBalancer.java * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java Backport HBASE-3373 (per-table load balancing) to 0.92 -- Key: HBASE-5231 URL: https://issues.apache.org/jira/browse/HBASE-5231 Project: HBase Issue Type: Improvement Reporter: Zhihong Yu Fix For: 0.92.1 Attachments: 5231-v2.txt, 5231.addendum, 5231.txt This JIRA backports per-table load balancing to 0.90 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
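The per-table idea behind HBASE-3373 can be illustrated with a deliberately simplified sketch: instead of balancing the single global pool of regions, balance each table's regions across the servers separately, so that no server ends up hosting a disproportionate share of one hot table. This round-robin placement is an assumption-laden toy, not DefaultLoadBalancer's actual algorithm.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy per-table balancer: each table's regions are spread round-robin
// across servers independently of the other tables.
public class PerTableBalanceSketch {
    static Map<String, String> balanceTable(List<String> regions, List<String> servers) {
        Map<String, String> plan = new LinkedHashMap<>();
        for (int i = 0; i < regions.size(); i++) {
            plan.put(regions.get(i), servers.get(i % servers.size()));
        }
        return plan;
    }

    public static void main(String[] args) {
        List<String> servers = List.of("rs1", "rs2");
        // Balancing table by table guarantees each table's regions are spread
        // evenly, which a purely global region count does not.
        System.out.println(balanceTable(List.of("t1,r1", "t1,r2", "t1,r3"), servers));
        System.out.println(balanceTable(List.of("t2,r1", "t2,r2"), servers));
    }
}
```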
[jira] [Commented] (HBASE-5271) Result.getValue and Result.getColumnLatest return the wrong column.
[ https://issues.apache.org/jira/browse/HBASE-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13194186#comment-13194186 ] Hudson commented on HBASE-5271: --- Integrated in HBase-0.92 #263 (See [https://builds.apache.org/job/HBase-0.92/263/]) HBASE-5271 Result.getValue and Result.getColumnLatest return the wrong column (Ghais Issa) tedyu : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/KeyValue.java * /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java Result.getValue and Result.getColumnLatest return the wrong column. --- Key: HBASE-5271 URL: https://issues.apache.org/jira/browse/HBASE-5271 Project: HBase Issue Type: Bug Components: client Affects Versions: 0.90.5 Reporter: Ghais Issa Assignee: Ghais Issa Fix For: 0.94.0, 0.90.7, 0.92.1 Attachments: 5271-90.txt, 5271-v2.txt, fixKeyValueMatchingColumn.diff, testGetValue.diff In the following example result.getValue returns the wrong column: KeyValue kv = new KeyValue(Bytes.toBytes("r"), Bytes.toBytes("24"), Bytes.toBytes("2"), Bytes.toBytes(7L)); Result result = new Result(new KeyValue[] { kv }); System.out.println(Bytes.toLong(result.getValue(Bytes.toBytes("2"), Bytes.toBytes("2")))); // prints 7
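The example above asks getValue for family "2" yet matches a KeyValue stored under family "24", which is exactly the false positive a prefix-only byte comparison produces. The real fix is in KeyValue's column-matching code (see the fixKeyValueMatchingColumn.diff attachment); this standalone sketch only demonstrates the class of bug: a correct column match must compare lengths, not just the overlapping bytes.

```java
import java.util.Arrays;

// Standalone illustration of the wrong-column bug class in HBASE-5271.
public class ColumnMatchSketch {
    // Buggy: compares only the overlapping prefix, so "2" "matches" "24".
    static boolean matchesPrefixOnly(byte[] stored, byte[] asked) {
        for (int i = 0; i < Math.min(stored.length, asked.length); i++) {
            if (stored[i] != asked[i]) return false;
        }
        return true;
    }

    // Fixed: Arrays.equals checks length equality before comparing bytes.
    static boolean matchesExactly(byte[] stored, byte[] asked) {
        return Arrays.equals(stored, asked);
    }

    public static void main(String[] args) {
        byte[] stored = "24".getBytes();
        byte[] asked = "2".getBytes();
        System.out.println(matchesPrefixOnly(stored, asked)); // true: the bug
        System.out.println(matchesExactly(stored, asked));    // false: the fix
    }
}
```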
[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.
[ https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13194381#comment-13194381 ] Hudson commented on HBASE-5282: --- Integrated in HBase-0.92 #265 (See [https://builds.apache.org/job/HBase-0.92/265/]) HBASE-5282 Possible file handle leak with truncated HLog file jmhsieh : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java Possible file handle leak with truncated HLog file. --- Key: HBASE-5282 URL: https://issues.apache.org/jira/browse/HBASE-5282 Project: HBase Issue Type: Bug Affects Versions: 0.94.0, 0.90.5, 0.92.0 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 0.94.0, 0.92.1 Attachments: hbase-5282.patch, hbase-5282.v2.patch When debugging hbck, found that the code responsible for this exception can leak open file handles. {code} 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from hdfs://haus01. sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered .edits/3211315; minSequenceid=3214658 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of region=test5,8 \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840 113e. 
java.io.EOFException at java.io.DataInputStream.readByte(DataInputStream.java:250) at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299) at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320) at org.apache.hadoop.io.Text.readString(Text.java:400) at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486) at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1437) at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1424) at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1419) at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.init(SequenceFileLogReader.java:57) at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158) at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619) {code} -- This message is automatically generated by JIRA. 
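The leak pattern described in HBASE-5282 is general: if a reader's initialization throws (here, an EOFException while parsing a truncated log's header), the underlying stream that was already opened must still be closed, or the file handle leaks on every failed open. This is an illustrative sketch, not SequenceFileLogReader's actual code.

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

// Close-on-init-failure pattern: the fix for the handle leak class of bug.
public class CloseOnInitFailureSketch {
    static DataInputStream open(File f) throws IOException {
        FileInputStream raw = new FileInputStream(f);
        try {
            DataInputStream in = new DataInputStream(raw);
            // Header parse of a truncated file can throw EOFException here.
            in.readByte();
            return in;
        } catch (IOException e) {
            raw.close(); // without this, every failed open leaks a handle
            throw e;
        }
    }

    public static void main(String[] args) throws IOException {
        File empty = File.createTempFile("truncated", ".log");
        empty.deleteOnExit();
        try {
            open(empty);
        } catch (EOFException expected) {
            System.out.println("EOF on truncated file, but the handle was closed");
        }
    }
}
```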
[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.
[ https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13194500#comment-13194500 ] Hudson commented on HBASE-5282: --- Integrated in HBase-TRUNK-security #92 (See [https://builds.apache.org/job/HBase-TRUNK-security/92/]) HBASE-5282 Possible file handle leak with truncated HLog file jmhsieh : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java Possible file handle leak with truncated HLog file. --- Key: HBASE-5282 URL: https://issues.apache.org/jira/browse/HBASE-5282 Project: HBase Issue Type: Bug Affects Versions: 0.94.0, 0.90.5, 0.92.0 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 0.94.0, 0.92.1 Attachments: hbase-5282.patch, hbase-5282.v2.patch When debugging hbck, found that the code responsible for this exception can leak open file handles. {code} 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from hdfs://haus01. sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered .edits/3211315; minSequenceid=3214658 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of region=test5,8 \x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840 113e. 
java.io.EOFException at java.io.DataInputStream.readByte(DataInputStream.java:250) at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299) at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320) at org.apache.hadoop.io.Text.readString(Text.java:400) at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486) at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1437) at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1424) at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1419) at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.init(SequenceFileLogReader.java:57) at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158) at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619) {code} -- This message is automatically generated by JIRA. 
[jira] [Commented] (HBASE-5274) Filter out the expired store file scanner during the compaction
[ https://issues.apache.org/jira/browse/HBASE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13194501#comment-13194501 ] Hudson commented on HBASE-5274: --- Integrated in HBase-TRUNK-security #92 (See [https://builds.apache.org/job/HBase-TRUNK-security/92/]) [jira] [HBASE-5274] Filter out expired scanners on compaction as well Summary: This is a followup for D1017 to make it similar to D909 (89-fb). The fix for 89-fb used the TTL-based scanner filtering logic on both normal scanners and compactions, while the trunk fix D1017 did not. This is just the delta between the two diffs that brings filtering expired store files on compaction to trunk. Test Plan: Unit tests Reviewers: Liyin, JIRA, lhofhansl, Kannan Reviewed By: Liyin CC: Liyin, tedyu, Kannan, mbautin, lhofhansl Differential Revision: https://reviews.facebook.net/D1473 mbautin : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaMetrics.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingTTL.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java Filter out the expired store file scanner during the compaction --- Key: HBASE-5274 URL: https://issues.apache.org/jira/browse/HBASE-5274 Project: HBase Issue Type: Improvement Reporter: Liyin Tang Assignee: Mikhail Bautin Attachments: D1407.1.patch, D1407.1.patch, D1407.1.patch, D1407.1.patch, D1407.1.patch, D1473.1.patch During the compaction time, HBase will generate a store scanner which will scan a list of store files. And it would be more efficient to filter out the expired store files since there is no need to read any key values from these store files.
This optimization has already been implemented on 89-fb, and it is the building block for HBASE-5199 as well. Compacting the expired store files is expected to be a no-op.
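The selection step described above can be sketched as follows: a store file whose newest entry is already older than the column family's TTL cannot contribute any live cells, so the compaction need not open a scanner on it at all. The field and method names here are illustrative, not HBase's Store/StoreScanner API.

```java
import java.util.List;

// Toy version of TTL-based store file selection for compaction.
public class ExpiredFileFilterSketch {
    record StoreFile(String name, long maxTimestampMs) {}

    static List<StoreFile> selectLive(List<StoreFile> files, long ttlMs, long nowMs) {
        long oldestLive = nowMs - ttlMs;
        // Keep only files that may still contain unexpired cells; entirely
        // expired files are skipped, so compacting them is a no-op.
        return files.stream()
            .filter(f -> f.maxTimestampMs() >= oldestLive)
            .toList();
    }

    public static void main(String[] args) {
        long now = 1_000_000L;
        List<StoreFile> files = List.of(
            new StoreFile("old.hfile", now - 500_000),
            new StoreFile("new.hfile", now - 1_000));
        // With a 100-second TTL, old.hfile is entirely expired and dropped.
        System.out.println(selectLive(files, 100_000, now));
    }
}
```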
[jira] [Commented] (HBASE-5271) Result.getValue and Result.getColumnLatest return the wrong column.
[ https://issues.apache.org/jira/browse/HBASE-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13194502#comment-13194502 ] Hudson commented on HBASE-5271: --- Integrated in HBase-TRUNK-security #92 (See [https://builds.apache.org/job/HBase-TRUNK-security/92/]) HBASE-5271 Result.getValue and Result.getColumnLatest return the wrong column (Ghais Issa) tedyu : Files : * /hbase/trunk/CHANGES.txt * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/KeyValue.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java Result.getValue and Result.getColumnLatest return the wrong column. --- Key: HBASE-5271 URL: https://issues.apache.org/jira/browse/HBASE-5271 Project: HBase Issue Type: Bug Components: client Affects Versions: 0.90.5 Reporter: Ghais Issa Assignee: Ghais Issa Fix For: 0.94.0, 0.90.7, 0.92.1 Attachments: 5271-90.txt, 5271-v2.txt, fixKeyValueMatchingColumn.diff, testGetValue.diff In the following example result.getValue returns the wrong column: KeyValue kv = new KeyValue(Bytes.toBytes("r"), Bytes.toBytes("24"), Bytes.toBytes("2"), Bytes.toBytes(7L)); Result result = new Result(new KeyValue[] { kv }); System.out.println(Bytes.toLong(result.getValue(Bytes.toBytes("2"), Bytes.toBytes("2")))); // prints 7
[jira] [Commented] (HBASE-5274) Filter out the expired store file scanner during the compaction
[ https://issues.apache.org/jira/browse/HBASE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13196715#comment-13196715 ] Hudson commented on HBASE-5274: --- Integrated in HBase-TRUNK #2648 (See [https://builds.apache.org/job/HBase-TRUNK/2648/]) [jira] [HBASE-5274] Filter out expired scanners on compaction as well Summary: This is a followup for D1017 to make it similar to D909 (89-fb). The fix for 89-fb used the TTL-based scanner filtering logic on both normal scanners and compactions, while the trunk fix D1017 did not. This is just the delta between the two diffs that brings filtering expired store files on compaction to trunk. Test Plan: Unit tests Reviewers: Liyin, JIRA, lhofhansl, Kannan Reviewed By: Liyin CC: Liyin, tedyu, Kannan, mbautin, lhofhansl Differential Revision: https://reviews.facebook.net/D1473 mbautin : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaMetrics.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingTTL.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java Filter out the expired store file scanner during the compaction --- Key: HBASE-5274 URL: https://issues.apache.org/jira/browse/HBASE-5274 Project: HBase Issue Type: Improvement Reporter: Liyin Tang Assignee: Mikhail Bautin Attachments: D1407.1.patch, D1407.1.patch, D1407.1.patch, D1407.1.patch, D1407.1.patch, D1473.1.patch During the compaction time, HBase will generate a store scanner which will scan a list of store files. And it would be more efficient to filter out the expired store files since there is no need to read any key values from these store files. This optimization has already been implemented on 89-fb, and it is the building block for HBASE-5199 as well.
Compacting the expired store files is expected to be a no-op.
[jira] [Commented] (HBASE-5297) Update metrics numOpenConnections and callQueueLen directly in HBaseServer
[ https://issues.apache.org/jira/browse/HBASE-5297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13196714#comment-13196714 ] Hudson commented on HBASE-5297: --- Integrated in HBase-TRUNK #2648 (See [https://builds.apache.org/job/HBase-TRUNK/2648/]) HBASE-5297 Update metrics numOpenConnections and callQueueLen directly in HBaseServer (Scott Chen) tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java Update metrics numOpenConnections and callQueueLen directly in HBaseServer -- Key: HBASE-5297 URL: https://issues.apache.org/jira/browse/HBASE-5297 Project: HBase Issue Type: Improvement Components: metrics Reporter: Scott Chen Assignee: Scott Chen Priority: Minor Fix For: 0.94.0 Attachments: HBASE-5297.D1509.1.patch, HBASE-5297.D1509.2.patch, HBASE-5297.D1509.3.patch It's better to directly update the metrics outside HBaseRpcMetrics so that HBaseRpcMetrics doesn't have to hold reference to HBaseServer. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
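The design HBASE-5297 describes can be sketched in a few lines: the server owns plain counters and updates them at the points where connections open/close and calls are queued/dequeued, while the metrics layer only reads them through getters, so it needs no back-reference to the server. All names below are illustrative, not HBaseServer's actual fields.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Server-owned counters, read (not written) by the metrics layer.
public class DirectMetricsSketch {
    private final AtomicInteger numOpenConnections = new AtomicInteger();
    private final AtomicInteger callQueueLen = new AtomicInteger();

    public void onConnectionOpened() { numOpenConnections.incrementAndGet(); }
    public void onConnectionClosed() { numOpenConnections.decrementAndGet(); }
    public void onCallQueued()       { callQueueLen.incrementAndGet(); }
    public void onCallDequeued()     { callQueueLen.decrementAndGet(); }

    // The metrics system polls these getters; it never holds the server itself.
    public int openConnections() { return numOpenConnections.get(); }
    public int queueLength()     { return callQueueLen.get(); }

    public static void main(String[] args) {
        DirectMetricsSketch s = new DirectMetricsSketch();
        s.onConnectionOpened();
        s.onCallQueued();
        System.out.println(s.openConnections() + " " + s.queueLength()); // 1 1
    }
}
```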
[jira] [Commented] (HBASE-5256) Use WritableUtils.readVInt() in RegionLoad.readFields()
[ https://issues.apache.org/jira/browse/HBASE-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13197507#comment-13197507 ] Hudson commented on HBASE-5256: --- Integrated in HBase-TRUNK #2649 (See [https://builds.apache.org/job/HBase-TRUNK/2649/]) HBASE-5256 Use WritableUtils.readVInt() in RegionLoad.readFields() (Mubarak) tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HServerLoad.java Use WritableUtils.readVInt() in RegionLoad.readFields() --- Key: HBASE-5256 URL: https://issues.apache.org/jira/browse/HBASE-5256 Project: HBase Issue Type: Task Reporter: Zhihong Yu Assignee: Mubarak Seyed Fix For: 0.94.0 Attachments: HBASE-5256.trunk.v1.patch Currently in.readInt() is used in RegionLoad.readFields() More metrics would be added to RegionLoad in the future, we should utilize WritableUtils.readVInt() to reduce the amount of data exchanged between Master and region servers. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
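The saving HBASE-5256 is after comes from variable-length encoding: small values, the common case for per-region counters, fit in one byte instead of the fixed four that readInt()/writeInt use. The sketch below is a generic LEB128-style varint, not Hadoop's actual WritableUtils wire format (which uses a different length-prefix scheme), but it shows the size effect.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Generic varint writer (7 payload bits per byte, high bit means "more bytes").
public class VIntSketch {
    static int writeVInt(DataOutput out, int v) throws IOException {
        int bytes = 0;
        do {
            int b = v & 0x7F;
            v >>>= 7;
            // Set the continuation bit if more payload bytes follow.
            out.writeByte(v != 0 ? (b | 0x80) : b);
            bytes++;
        } while (v != 0);
        return bytes;
    }

    public static void main(String[] args) throws IOException {
        DataOutputStream out = new DataOutputStream(new ByteArrayOutputStream());
        System.out.println(writeVInt(out, 42));      // 1 byte instead of 4
        System.out.println(writeVInt(out, 300_000)); // 3 bytes instead of 4
    }
}
```

With many RegionLoad records shipped per heartbeat, those per-field savings add up across the master/region server traffic the issue mentions.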
[jira] [Commented] (HBASE-5259) Normalize the RegionLocation in TableInputFormat by the reverse DNS lookup.
[ https://issues.apache.org/jira/browse/HBASE-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13197508#comment-13197508 ] Hudson commented on HBASE-5259: --- Integrated in HBase-TRUNK #2649 (See [https://builds.apache.org/job/HBase-TRUNK/2649/]) HBASE-5259 Normalize the RegionLocation in TableInputFormat by the reverse DNS lookup (Liyin Tang) tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java Normalize the RegionLocation in TableInputFormat by the reverse DNS lookup. --- Key: HBASE-5259 URL: https://issues.apache.org/jira/browse/HBASE-5259 Project: HBase Issue Type: Improvement Reporter: Liyin Tang Assignee: Liyin Tang Fix For: 0.94.0 Attachments: D1413.1.patch, D1413.1.patch, D1413.1.patch, D1413.1.patch, D1413.2.patch, D1413.2.patch, D1413.2.patch, D1413.2.patch, D1413.3.patch, D1413.3.patch, D1413.3.patch, D1413.3.patch, HBASE-5259.patch Assuming HBase and MapReduce run in the same cluster, the TableInputFormat overrides the split function which divides all the regions from one particular table into a series of mapper tasks, so each mapper task can process a region or one part of a region. Ideally, the mapper task should run on the same machine on which the region server hosts the corresponding region. That's the motivation for the TableInputFormat to set the RegionLocation so that the MapReduce framework can respect node locality. The code simply sets the host name of the region server as the HRegionLocation. However, the host name of the region server may have a different format from the host name of the task tracker (mapper task). The task tracker always gets its hostname by the reverse DNS lookup. And the DNS service may return a different host name format. For example, the host name of the region server is correctly set as a.b.c.d while the reverse DNS lookup may return a.b.c.d. (with an additional dot at the end).
So the solution is to set the RegionLocation by the reverse DNS lookup as well. No matter what host name format the DNS system is using, the TableInputFormat is responsible for keeping the host name format consistent with the MapReduce framework.
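The trailing-dot case described above is one concrete instance of the format mismatch: a reverse-DNS answer is an absolute name and may end with a root dot, which fails plain string comparison against the task tracker's hostname and silently defeats locality. This sketch shows only that normalization step; the actual patch does the lookup itself in TableInputFormatBase.

```java
// Minimal hostname normalization: strip the root dot a reverse-DNS
// answer may carry, so string comparison with the task tracker's
// hostname works. Illustrative, not the actual TableInputFormatBase code.
public class HostnameNormalizeSketch {
    static String normalize(String host) {
        return host.endsWith(".") ? host.substring(0, host.length() - 1) : host;
    }

    public static void main(String[] args) {
        // Both spellings now compare equal, so the split's location matches
        // the task tracker and the scheduler can place the mapper locally.
        System.out.println(normalize("a.b.c.d.").equals("a.b.c.d")); // true
        System.out.println(normalize("a.b.c.d").equals("a.b.c.d"));  // true
    }
}
```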
[jira] [Commented] (HBASE-5290) [FindBugs] Synchronization on boxed primitive
[ https://issues.apache.org/jira/browse/HBASE-5290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13197506#comment-13197506 ] Hudson commented on HBASE-5290: --- Integrated in HBase-TRUNK #2649 (See [https://builds.apache.org/job/HBase-TRUNK/2649/]) HBASE-5290 [FindBugs] Synchronization on boxed primitive (Ben West) tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/CompactSelection.java [FindBugs] Synchronization on boxed primitive - Key: HBASE-5290 URL: https://issues.apache.org/jira/browse/HBASE-5290 Project: HBase Issue Type: Bug Affects Versions: 0.94.0 Reporter: Liyin Tang Assignee: Ben West Priority: Minor Fix For: 0.94.0 Attachments: 5290-v3.txt, 5290-v4.txt, HBASE-5290.patch, HBASE-5290.patch, HBASE-5290v2.patch This bug is reported by the findBugs tool, which is a static analysis tool. Bug: Synchronization on Integer in org.apache.hadoop.hbase.regionserver.compactions.CompactSelection.emptyFileList() The code synchronizes on a boxed primitive constant, such as an Integer. {code} private static Integer count = 0; ... synchronized(count) { count++; } ... {code} Since Integer objects can be cached and shared, this code could be synchronizing on the same object as other, unrelated code, leading to unresponsiveness and possible deadlock See CERT CON08-J. Do not synchronize on objects that may be reused for more information. Confidence: Normal, Rank: Troubling (14) Pattern: DL_SYNCHRONIZATION_ON_BOXED_PRIMITIVE Type: DL, Category: MT_CORRECTNESS (Multithreaded correctness) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
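The quoted FindBugs pattern is doubly broken: small Integer values are cached and shared JVM-wide (so unrelated code may lock the same object), and the autoboxed `count++` rebinds the field to a new Integer, so concurrent threads end up synchronizing on different monitors. The usual fixes, sketched below, are a dedicated final lock object or an atomic counter.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Two standard fixes for DL_SYNCHRONIZATION_ON_BOXED_PRIMITIVE.
public class BoxedLockSketch {
    // Fix 1: synchronize on a dedicated lock that is never reassigned.
    private static final Object LOCK = new Object();
    private static int count = 0;

    public static void incrementLocked() {
        synchronized (LOCK) { count++; }
    }

    public static int lockedCount() {
        synchronized (LOCK) { return count; }
    }

    // Fix 2: sidestep explicit locking entirely with an atomic counter.
    private static final AtomicInteger atomic = new AtomicInteger();

    public static void incrementAtomic() { atomic.incrementAndGet(); }
    public static int atomicCount() { return atomic.get(); }

    public static void main(String[] args) {
        incrementLocked();
        incrementAtomic();
        System.out.println(lockedCount() + " " + atomicCount()); // 1 1
    }
}
```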
[jira] [Commented] (HBASE-5286) bin/hbase's logic of adding Hadoop jar files to the classpath is fragile when presented with split packaged Hadoop 0.23 installation
[ https://issues.apache.org/jira/browse/HBASE-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13197509#comment-13197509 ] Hudson commented on HBASE-5286: --- Integrated in HBase-TRUNK #2649 (See [https://builds.apache.org/job/HBase-TRUNK/2649/]) HBASE-5286 Added pointer to Lars Hofhansl's blog describing his attempt at adding a '...column prefix delete marker' stack : Files : * /hbase/trunk/src/docbkx/book.xml bin/hbase's logic of adding Hadoop jar files to the classpath is fragile when presented with split packaged Hadoop 0.23 installation Key: HBASE-5286 URL: https://issues.apache.org/jira/browse/HBASE-5286 Project: HBase Issue Type: Bug Components: scripts Affects Versions: 0.92.0 Reporter: Roman Shaposhnik Assignee: Roman Shaposhnik Here's the bit from bin/hbase that might need TLC now that Hadoop can be spotted in the wild in a split-package configuration:
{noformat}
# If avail, add Hadoop to the CLASSPATH and to the JAVA_LIBRARY_PATH
if [ ! -z $HADOOP_HOME ]; then
  HADOOPCPPATH=
  if [ -z $HADOOP_CONF_DIR ]; then
    HADOOPCPPATH=$(append_path ${HADOOPCPPATH} ${HADOOP_HOME}/conf)
  else
    HADOOPCPPATH=$(append_path ${HADOOPCPPATH} ${HADOOP_CONF_DIR})
  fi
  if [ `echo ${HADOOP_HOME}/hadoop-core*.jar` != ${HADOOP_HOME}/hadoop-core*.jar ] ; then
    HADOOPCPPATH=$(append_path ${HADOOPCPPATH} `ls ${HADOOP_HOME}/hadoop-core*.jar | head -1`)
  else
    HADOOPCPPATH=$(append_path ${HADOOPCPPATH} `ls ${HADOOP_HOME}/hadoop-common*.jar | head -1`)
    HADOOPCPPATH=$(append_path ${HADOOPCPPATH} `ls ${HADOOP_HOME}/hadoop-hdfs*.jar | head -1`)
    HADOOPCPPATH=$(append_path ${HADOOPCPPATH} `ls ${HADOOP_HOME}/hadoop-mapred*.jar | head -1`)
  fi
{noformat}
There are a couple of issues with the above code:
0. HADOOP_HOME is now deprecated in Hadoop 0.23
1. the list of jar files added to the classpath should be revised
2. we need to figure out a more robust way to get the required jar files onto the classpath (patterns like hadoop-mapred*.jar tend to match src/test jars as well)
Better yet, it would be useful to look into whether we can transition HBase's bin/hbase to using bin/hadoop as a launcher script instead of direct Java invocations (Pig, Hive, Sqoop and Mahout already do that) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5307) Unable to gracefully decommission a node because of script error
[ https://issues.apache.org/jira/browse/HBASE-5307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13197510#comment-13197510 ] Hudson commented on HBASE-5307: --- Integrated in HBase-TRUNK #2649 (See [https://builds.apache.org/job/HBase-TRUNK/2649/]) HBASE-5307 Unable to gracefully decommission a node because of script error stack : Files : * /hbase/trunk/bin/region_mover.rb Unable to gracefully decommission a node because of script error Key: HBASE-5307 URL: https://issues.apache.org/jira/browse/HBASE-5307 Project: HBase Issue Type: Bug Components: scripts Affects Versions: 0.92.0 Reporter: YiFeng Jiang Fix For: 0.92.1 Attachments: region_mover_HBASE-5307-0.92.patch Unable to gracefully decommission a node because a NameError occurred in region_mover.rb:
{code}
$ bin/graceful_stop.sh ip-10-160-226-84.us-west-1.compute.internal
...
NameError: no constructor for arguments (org.jruby.RubyString) on Java::OrgApacheHadoopHbase::HServerAddress
  available overloads:
    (org.apache.hadoop.hbase.HServerAddress)
    (java.net.InetSocketAddress)
  getRegions at /usr/local/hbase/current/bin/region_mover.rb:254
  unloadRegions at /usr/local/hbase/current/bin/region_mover.rb:314
  (root) at /usr/local/hbase/current/bin/region_mover.rb:430
Unloaded ip-10-160-226-84.us-west-1.compute.internal region(s)
ip-10-160-226-84.us-west-1.compute.internal: stopping regionserver..
{code}
The reason is that region_mover.rb calls the wrong HBase API when trying to establish a connection to the region server. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5295) Improve the Thrift API to switch on/off writing to wal for Mutations
[ https://issues.apache.org/jira/browse/HBASE-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13198087#comment-13198087 ] Hudson commented on HBASE-5295: --- Integrated in HBase-TRUNK #2650 (See [https://builds.apache.org/job/HBase-TRUNK/2650/]) HBASE-5295 Improve the Thrift API to switch on/off writing to wal for Mutations (Dhruba) tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/Mutation.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRowResult.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/TScan.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftUtilities.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDelete.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TGet.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THBaseService.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIncrement.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TPut.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TResult.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TScan.java * /hbase/trunk/src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift * /hbase/trunk/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java Improve the Thrift API to switch on/off writing to wal for Mutations - Key: HBASE-5295 URL: https://issues.apache.org/jira/browse/HBASE-5295 Project: HBase Issue Type: Improvement 
Components: thrift Reporter: dhruba borthakur Assignee: dhruba borthakur Attachments: D1515.1.patch, D1515.1.patch, D1515.1.patch, D1515.1.patch, D1515.2.patch, D1515.2.patch, D1515.2.patch, D1515.2.patch The Thrift API currently does not support switching off WAL updates for Puts/Deletes. Support it. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
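The shape of the improvement is essentially one boolean carried on each mutation. A minimal illustrative sketch follows; the class and field names here are assumptions for illustration, not the actual generated Thrift classes.

```java
// Illustrative sketch only: a mutation struct carrying a flag that
// controls whether the write-ahead log is updated for this operation.
// Skipping the WAL trades durability (edits can be lost on a crash)
// for write throughput.
class SimpleMutation {
    final byte[] column;
    final byte[] value;
    final boolean writeToWal;   // false = skip the WAL for this mutation

    SimpleMutation(byte[] column, byte[] value, boolean writeToWal) {
        this.column = column;
        this.value = value;
        this.writeToWal = writeToWal;
    }
}
```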
[jira] [Commented] (HBASE-5283) Request counters may become negative for heavily loaded regions
[ https://issues.apache.org/jira/browse/HBASE-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13198089#comment-13198089 ] Hudson commented on HBASE-5283: --- Integrated in HBase-TRUNK #2650 (See [https://builds.apache.org/job/HBase-TRUNK/2650/]) HBASE-5283 Request counters may become negative for heavily loaded regions (Mubarak) tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HServerLoad.java Request counters may become negative for heavily loaded regions --- Key: HBASE-5283 URL: https://issues.apache.org/jira/browse/HBASE-5283 Project: HBase Issue Type: Bug Affects Versions: 0.92.0 Reporter: Zhihong Yu Assignee: Mubarak Seyed Fix For: 0.94.0 Attachments: 5283.txt, HBASE-5283.trunk.v1.patch The requests counter shows a negative count, for example under the 'Requests' column: -645470239
{code}
Name                                                                              Region Server  Start Key                End Key                  Requests
usertable,user2037516127892189021,1326756873774.16833e4566d1daef109b8fdcd1f4b5a6. xxx.com:60030  user2037516127892189021  user2296868939942738705  -645470239
{code}
RegionLoad.readRequestsCount and RegionLoad.writeRequestsCount are of int type. Our Ops team has been running lots of heavy-load operations, and RegionLoad.getRequestsCount() has overflowed Integer.MAX_VALUE, ending up with the bit pattern 0xD986E7E1. In table.jsp, RegionLoad.getRequestsCount() is assigned to a long; the already-negative int value sign-extends, so 0xD986E7E1 becomes -645470239 in decimal. The suggested fix is to make readRequestsCount and writeRequestsCount of long type. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
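The arithmetic in the report can be checked in a few lines of plain Java; this is an illustrative snippet, not HBase code.

```java
public class CounterOverflow {
    public static void main(String[] args) {
        // Bit pattern from the report: as a 32-bit int this is already negative.
        int requests = 0xD986E7E1;
        System.out.println(requests);   // prints -645470239, the value shown in table.jsp

        // Assigning the int to a long sign-extends, so widening after the
        // fact does not recover the true count:
        long widened = requests;
        System.out.println(widened);    // prints -645470239

        // Keeping the counter as a long from the start (the suggested fix)
        // leaves room up to Long.MAX_VALUE and avoids the wraparound.
        long trueCount = 0xD986E7E1L;
        System.out.println(trueCount);  // prints 3649497057
    }
}
```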
[jira] [Commented] (HBASE-5212) Fix test TestTableMapReduce against 0.23.
[ https://issues.apache.org/jira/browse/HBASE-5212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13198088#comment-13198088 ] Hudson commented on HBASE-5212: --- Integrated in HBase-TRUNK #2650 (See [https://builds.apache.org/job/HBase-TRUNK/2650/]) HBASE-5212 Fix test TestTableMapReduce against 0.23 (Ted and Gregory) tedyu : Files : * /hbase/trunk/pom.xml * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java Fix test TestTableMapReduce against 0.23. - Key: HBASE-5212 URL: https://issues.apache.org/jira/browse/HBASE-5212 Project: HBase Issue Type: Bug Affects Versions: 0.92.0 Reporter: Mahadev konar Assignee: Gregory Chanan Fix For: 0.94.0 Attachments: 5212-v2.txt, HBASE-5212-v3.patch, HBASE-5212.patch As reported by Andrew on the hadoop mailing list, mvn -Dhadoop.profile=23 clean test -Dtest=org.apache.hadoop.hbase.mapreduce.TestTableMapReduce fails on 0.92 branch. There are minor changes to HBase poms required to fix that. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5310) HConnectionManager server cache key enhancement
[ https://issues.apache.org/jira/browse/HBASE-5310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13198086#comment-13198086 ] Hudson commented on HBASE-5310: --- Integrated in HBase-TRUNK #2650 (See [https://builds.apache.org/job/HBase-TRUNK/2650/]) HBASE-5310 HConnectionManager server cache key enhancement (Jimmy Xiang) tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/client/ServerCallable.java HConnectionManager server cache key enhancement --- Key: HBASE-5310 URL: https://issues.apache.org/jira/browse/HBASE-5310 Project: HBase Issue Type: Improvement Components: client Affects Versions: 0.94.0 Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Minor Fix For: 0.94.0 Attachments: hbase-5310.txt HConnectionManager uses the deprecated HServerAddress to create the server cache key, which requires resolving the address every time. It would be better to use HRegionLocation.getHostnamePort() instead. In our cluster we have a DNS issue: resolving an address sometimes fails, which kills the application because HServerAddress.getResolvedAddress throws a runtime IllegalArgumentException. This change fixes that issue as well. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
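A minimal sketch of the proposed keying scheme, using hypothetical names rather than the actual HConnectionManager internals: the cache key is the plain hostname:port string, so building and looking up a key never touches DNS.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a server cache keyed by "hostname:port".
// Because the key is built by string concatenation, a lookup never
// triggers DNS resolution (unlike a key derived from a resolved
// InetSocketAddress, which can throw when DNS is flaky).
class ServerCache<V> {
    private final ConcurrentHashMap<String, V> servers = new ConcurrentHashMap<>();

    static String key(String hostname, int port) {
        return hostname + ":" + port;   // no address resolution involved
    }

    void put(String hostname, int port, V server) {
        servers.put(key(hostname, port), server);
    }

    V get(String hostname, int port) {
        return servers.get(key(hostname, port));
    }
}
```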
[jira] [Commented] (HBASE-5212) Fix test TestTableMapReduce against 0.23.
[ https://issues.apache.org/jira/browse/HBASE-5212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13198654#comment-13198654 ] Hudson commented on HBASE-5212: --- Integrated in HBase-0.92 #270 (See [https://builds.apache.org/job/HBase-0.92/270/]) HBASE-5212 Fix test TestTableMapReduce against 0.23 (Ted and Gregory) jmhsieh : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/pom.xml * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java * /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java * /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java Fix test TestTableMapReduce against 0.23. - Key: HBASE-5212 URL: https://issues.apache.org/jira/browse/HBASE-5212 Project: HBase Issue Type: Bug Affects Versions: 0.92.0 Reporter: Mahadev konar Assignee: Gregory Chanan Fix For: 0.94.0, 0.92.1 Attachments: 5212-v2.txt, HBASE-5212-v3.patch, HBASE-5212.patch As reported by Andrew on the hadoop mailing list, mvn -Dhadoop.profile=23 clean test -Dtest=org.apache.hadoop.hbase.mapreduce.TestTableMapReduce fails on 0.92 branch. There are minor changes to HBase poms required to fix that. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5266) Add documentation for ColumnRangeFilter
[ https://issues.apache.org/jira/browse/HBASE-5266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13199271#comment-13199271 ] Hudson commented on HBASE-5266: --- Integrated in HBase-TRUNK #2652 (See [https://builds.apache.org/job/HBase-TRUNK/2652/]) HBASE-5266 Add documentation for ColumnRangeFilter larsh : Files : * /hbase/trunk/src/docbkx/book.xml Add documentation for ColumnRangeFilter --- Key: HBASE-5266 URL: https://issues.apache.org/jira/browse/HBASE-5266 Project: HBase Issue Type: Sub-task Components: documentation Reporter: Lars Hofhansl Assignee: Lars Hofhansl Priority: Minor Fix For: 0.94.0 Attachments: 5266-v2.txt, 5266-v3.txt, 5266.txt There are only a few lines of documentation for ColumnRangeFilter. Given the usefulness of this filter for efficient intra-row scanning (see HBASE-5229 and HBASE-4256), we should make this filter more prominent in the documentation. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5186) Add metrics to ThriftServer
[ https://issues.apache.org/jira/browse/HBASE-5186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13199272#comment-13199272 ] Hudson commented on HBASE-5186: --- Integrated in HBase-TRUNK #2652 (See [https://builds.apache.org/job/HBase-TRUNK/2652/]) HBASE-5186 Add metrics to ThriftServer (Scott Chen) tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionThriftServer.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/CallQueue.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/HbaseHandlerMetricsProxy.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/TBoundedThreadPoolServer.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/ThriftMetrics.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/thrift/TestCallQueue.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java Add metrics to ThriftServer --- Key: HBASE-5186 URL: https://issues.apache.org/jira/browse/HBASE-5186 Project: HBase Issue Type: Improvement Reporter: Scott Chen Assignee: Scott Chen Fix For: 0.94.0 Attachments: 5186-v10.txt, 5186-v11.txt, 5186-v12.txt, 5186-v9.txt, HBASE-5186.D1461.1.patch, HBASE-5186.D1461.2.patch, HBASE-5186.D1461.3.patch, HBASE-5186.D1461.4.patch, HBASE-5186.D1461.5.patch, HBASE-5186.D1461.6.patch, HBASE-5186.D1461.7.patch, HBASE-5186.D1461.8.patch It will be useful to have some metrics (queue length, waiting time, processing time, ...) similar to the Hadoop RPC server. This allows us to monitor system health and also provides a tool for diagnosing problems where Thrift calls are slow. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5304) Pluggable split key policy
[ https://issues.apache.org/jira/browse/HBASE-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13199537#comment-13199537 ] Hudson commented on HBASE-5304: --- Integrated in HBase-TRUNK-security #96 (See [https://builds.apache.org/job/HBase-TRUNK-security/96/]) HBASE-5304 Pluggable split key policy larsh : Files : * /hbase/trunk/src/docbkx/book.xml * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/ConstantSizeRegionSplitPolicy.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/PrefixSplitKeyPolicy.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java Pluggable split key policy -- Key: HBASE-5304 URL: https://issues.apache.org/jira/browse/HBASE-5304 Project: HBase Issue Type: Improvement Components: regionserver Reporter: Lars Hofhansl Assignee: Lars Hofhansl Fix For: 0.94.0 Attachments: 5304-v4.txt, 5304-v5.txt, 5304-v6.txt, 5304-v7.txt We need a way to specify custom policies for determining split keys. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
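One such policy, hinted at by the PrefixSplitKeyPolicy test class in the file list above, is to truncate the natural split point to a fixed-length prefix so that rows sharing the prefix never straddle two regions. A standalone sketch of the idea follows; it illustrates the truncation only and does not use the actual RegionSplitPolicy API.

```java
import java.util.Arrays;

// Standalone illustration of a prefix-based split point: truncate the
// natural midpoint key to a fixed prefix length so all rows sharing
// that prefix stay together in one region.
class PrefixSplit {
    static byte[] prefixSplitPoint(byte[] midpoint, int prefixLength) {
        if (midpoint == null || midpoint.length <= prefixLength) {
            return midpoint;   // key is already no longer than the prefix
        }
        return Arrays.copyOf(midpoint, prefixLength);
    }
}
```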
[jira] [Commented] (HBASE-5318) Support Eclipse Indigo
[ https://issues.apache.org/jira/browse/HBASE-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13199539#comment-13199539 ] Hudson commented on HBASE-5318: --- Integrated in HBase-TRUNK-security #96 (See [https://builds.apache.org/job/HBase-TRUNK-security/96/]) HBASE-5318 Support Eclipse Indigo (Jesse Yates) larsh : Files : * /hbase/trunk/pom.xml Support Eclipse Indigo --- Key: HBASE-5318 URL: https://issues.apache.org/jira/browse/HBASE-5318 Project: HBase Issue Type: Improvement Components: build Affects Versions: 0.94.0 Environment: Eclipse Indigo (1.4.1) which includes m2eclipse (1.0 SR1). Reporter: Jesse Yates Assignee: Jesse Yates Priority: Minor Labels: maven Attachments: mvn_HBASE-5318_r0.patch The current 'standard' release of Eclipse (Indigo) comes with m2eclipse installed. However, as of m2e v1.0, interesting lifecycle phases are handled via a 'connector', and several of the plugins we use don't support connectors. This means that Eclipse bails out and won't build the project or view it as 'working', even though it builds just fine from the command line. Since Eclipse is one of the major Java IDEs and Indigo has been around for a while, we should make it easy for new devs to pick up the code and for older devs to upgrade painlessly. The original build should not be modified in any significant way. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5186) Add metrics to ThriftServer
[ https://issues.apache.org/jira/browse/HBASE-5186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13199538#comment-13199538 ] Hudson commented on HBASE-5186: --- Integrated in HBase-TRUNK-security #96 (See [https://builds.apache.org/job/HBase-TRUNK-security/96/]) HBASE-5186 Add metrics to ThriftServer (Scott Chen) tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionThriftServer.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/CallQueue.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/HbaseHandlerMetricsProxy.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/TBoundedThreadPoolServer.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/ThriftMetrics.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/thrift/TestCallQueue.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java Add metrics to ThriftServer --- Key: HBASE-5186 URL: https://issues.apache.org/jira/browse/HBASE-5186 Project: HBase Issue Type: Improvement Reporter: Scott Chen Assignee: Scott Chen Fix For: 0.94.0 Attachments: 5186-v10.txt, 5186-v11.txt, 5186-v12.txt, 5186-v9.txt, HBASE-5186.D1461.1.patch, HBASE-5186.D1461.2.patch, HBASE-5186.D1461.3.patch, HBASE-5186.D1461.4.patch, HBASE-5186.D1461.5.patch, HBASE-5186.D1461.6.patch, HBASE-5186.D1461.7.patch, HBASE-5186.D1461.8.patch It will be useful to have some metrics (queue length, waiting time, processing time, ...) similar to the Hadoop RPC server. This allows us to monitor system health and also provides a tool for diagnosing problems where Thrift calls are slow. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5304) Pluggable split key policy
[ https://issues.apache.org/jira/browse/HBASE-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1319#comment-1319 ] Hudson commented on HBASE-5304: --- Integrated in HBase-TRUNK #2653 (See [https://builds.apache.org/job/HBase-TRUNK/2653/]) HBASE-5304 Pluggable split key policy larsh : Files : * /hbase/trunk/src/docbkx/book.xml * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/ConstantSizeRegionSplitPolicy.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/PrefixSplitKeyPolicy.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java Pluggable split key policy -- Key: HBASE-5304 URL: https://issues.apache.org/jira/browse/HBASE-5304 Project: HBase Issue Type: Improvement Components: regionserver Reporter: Lars Hofhansl Assignee: Lars Hofhansl Fix For: 0.94.0 Attachments: 5304-v4.txt, 5304-v5.txt, 5304-v6.txt, 5304-v7.txt We need a way to specify custom policies to determine split keys. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4658) Put attributes are not exposed via the ThriftServer
[ https://issues.apache.org/jira/browse/HBASE-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1318#comment-1318 ] Hudson commented on HBASE-4658: --- Integrated in HBase-TRUNK #2653 (See [https://builds.apache.org/job/HBase-TRUNK/2653/]) HBASE-4658 Put attributes are not exposed via the ThriftServer (Dhruba) tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionThriftServer.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java * /hbase/trunk/src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java Put attributes are not exposed via the ThriftServer --- Key: HBASE-4658 URL: https://issues.apache.org/jira/browse/HBASE-4658 Project: HBase Issue Type: Bug Components: thrift Reporter: dhruba borthakur Assignee: dhruba borthakur Attachments: D1563.1.patch, D1563.1.patch, D1563.1.patch, D1563.2.patch, D1563.2.patch, D1563.2.patch, D1563.3.patch, D1563.3.patch, D1563.3.patch, ThriftPutAttributes1.txt The Put API also accepts arbitrary attributes that an application can use to associate metadata with each put operation. These are not exposed via Thrift. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5318) Support Eclipse Indigo
[ https://issues.apache.org/jira/browse/HBASE-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1320#comment-1320 ] Hudson commented on HBASE-5318: --- Integrated in HBase-TRUNK #2653 (See [https://builds.apache.org/job/HBase-TRUNK/2653/]) HBASE-5318 Support Eclipse Indigo (Jesse Yates) larsh : Files : * /hbase/trunk/pom.xml Support Eclipse Indigo --- Key: HBASE-5318 URL: https://issues.apache.org/jira/browse/HBASE-5318 Project: HBase Issue Type: Improvement Components: build Affects Versions: 0.94.0 Environment: Eclipse Indigo (1.4.1) which includes m2eclipse (1.0 SR1). Reporter: Jesse Yates Assignee: Jesse Yates Priority: Minor Labels: maven Attachments: mvn_HBASE-5318_r0.patch The current 'standard' release of Eclipse (Indigo) comes with m2eclipse installed. However, as of m2e v1.0, interesting lifecycle phases are handled via a 'connector', and several of the plugins we use don't support connectors. This means that Eclipse bails out and won't build the project or view it as 'working', even though it builds just fine from the command line. Since Eclipse is one of the major Java IDEs and Indigo has been around for a while, we should make it easy for new devs to pick up the code and for older devs to upgrade painlessly. The original build should not be modified in any significant way. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5318) Support Eclipse Indigo
[ https://issues.apache.org/jira/browse/HBASE-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13200324#comment-13200324 ] Hudson commented on HBASE-5318: --- Integrated in HBase-TRUNK #2654 (See [https://builds.apache.org/job/HBase-TRUNK/2654/]) HBASE-5318 Support Eclipse Indigo (Jesse Yates) REVERT HBASE-5318 Support Eclipse Indigo tedyu : Files : * /hbase/trunk/pom.xml larsh : Files : * /hbase/trunk/pom.xml Support Eclipse Indigo --- Key: HBASE-5318 URL: https://issues.apache.org/jira/browse/HBASE-5318 Project: HBase Issue Type: Improvement Components: build Affects Versions: 0.94.0 Environment: Eclipse Indigo (1.4.1) which includes m2eclipse (1.0 SR1). Reporter: Jesse Yates Assignee: Jesse Yates Priority: Minor Labels: maven Attachments: mvn_HBASE-5318_r0.patch, mvn_HBASE-5318_r1.patch The current 'standard' release of Eclipse (Indigo) comes with m2eclipse installed. However, as of m2e v1.0, interesting lifecycle phases are handled via a 'connector', and several of the plugins we use don't support connectors. This means that Eclipse bails out and won't build the project or view it as 'working', even though it builds just fine from the command line. Since Eclipse is one of the major Java IDEs and Indigo has been around for a while, we should make it easy for new devs to pick up the code and for older devs to upgrade painlessly. The original build should not be modified in any significant way. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4658) Put attributes are not exposed via the ThriftServer
[ https://issues.apache.org/jira/browse/HBASE-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13200340#comment-13200340 ] Hudson commented on HBASE-4658: --- Integrated in HBase-TRUNK-security #100 (See [https://builds.apache.org/job/HBase-TRUNK-security/100/]) HBASE-4658 Put attributes are not exposed via the ThriftServer (Dhruba) tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionThriftServer.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java * /hbase/trunk/src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java Put attributes are not exposed via the ThriftServer --- Key: HBASE-4658 URL: https://issues.apache.org/jira/browse/HBASE-4658 Project: HBase Issue Type: Bug Components: thrift Reporter: dhruba borthakur Assignee: dhruba borthakur Attachments: D1563.1.patch, D1563.1.patch, D1563.1.patch, D1563.2.patch, D1563.2.patch, D1563.2.patch, D1563.3.patch, D1563.3.patch, D1563.3.patch, ThriftPutAttributes1.txt The Put API also accepts arbitrary attributes that an application can use to associate metadata with each put operation. These are not exposed via Thrift. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5318) Support Eclipse Indigo
[ https://issues.apache.org/jira/browse/HBASE-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13200341#comment-13200341 ] Hudson commented on HBASE-5318: --- Integrated in HBase-TRUNK-security #100 (See [https://builds.apache.org/job/HBase-TRUNK-security/100/]) HBASE-5318 Support Eclipse Indigo (Jesse Yates) REVERT HBASE-5318 Support Eclipse Indigo tedyu : Files : * /hbase/trunk/pom.xml larsh : Files : * /hbase/trunk/pom.xml Support Eclipse Indigo --- Key: HBASE-5318 URL: https://issues.apache.org/jira/browse/HBASE-5318 Project: HBase Issue Type: Improvement Components: build Affects Versions: 0.94.0 Environment: Eclipse Indigo (1.4.1) which includes m2eclipse (1.0 SR1). Reporter: Jesse Yates Assignee: Jesse Yates Priority: Minor Labels: maven Attachments: mvn_HBASE-5318_r0.patch, mvn_HBASE-5318_r1.patch The current 'standard' release of Eclipse (Indigo) comes with m2eclipse installed. However, as of m2e v1.0, interesting lifecycle phases are handled via a 'connector', and several of the plugins we use don't support connectors. This means that Eclipse bails out and won't build the project or view it as 'working', even though it builds just fine from the command line. Since Eclipse is one of the major Java IDEs and Indigo has been around for a while, we should make it easy for new devs to pick up the code and for older devs to upgrade painlessly. The original build should not be modified in any significant way. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5212) Fix test TestTableMapReduce against 0.23.
[ https://issues.apache.org/jira/browse/HBASE-5212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13200636#comment-13200636 ] Hudson commented on HBASE-5212: --- Integrated in HBase-0.92-security #90 (See [https://builds.apache.org/job/HBase-0.92-security/90/]) HBASE-5212 Fix test TestTableMapReduce against 0.23 (Ted and Gregory) jmhsieh : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/pom.xml * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java * /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java * /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java Fix test TestTableMapReduce against 0.23. - Key: HBASE-5212 URL: https://issues.apache.org/jira/browse/HBASE-5212 Project: HBase Issue Type: Bug Affects Versions: 0.92.0 Reporter: Mahadev konar Assignee: Gregory Chanan Fix For: 0.94.0, 0.92.1 Attachments: 5212-v2.txt, HBASE-5212-v3.patch, HBASE-5212.patch As reported by Andrew on the hadoop mailing list, mvn -Dhadoop.profile=23 clean test -Dtest=org.apache.hadoop.hbase.mapreduce.TestTableMapReduce fails on 0.92 branch. There are minor changes to HBase poms required to fix that. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5282) Possible file handle leak with truncated HLog file.
[ https://issues.apache.org/jira/browse/HBASE-5282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13200635#comment-13200635 ] Hudson commented on HBASE-5282: --- Integrated in HBase-0.92-security #90 (See [https://builds.apache.org/job/HBase-0.92-security/90/]) HBASE-5282 Possible file handle leak with truncated HLog file jmhsieh : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java Possible file handle leak with truncated HLog file. --- Key: HBASE-5282 URL: https://issues.apache.org/jira/browse/HBASE-5282 Project: HBase Issue Type: Bug Affects Versions: 0.94.0, 0.90.5, 0.92.0 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Fix For: 0.94.0, 0.92.1 Attachments: hbase-5282.patch, hbase-5282.v2.patch When debugging hbck, I found that the code responsible for this exception can leak open file handles. {code} 12/01/15 05:58:11 INFO regionserver.HRegion: Replaying edits from hdfs://haus01.sf.cloudera.com:56020/hbase-jon/test5/98a1e7255731aae44b3836641840113e/recovered.edits/3211315; minSequenceid=3214658 12/01/15 05:58:11 ERROR handler.OpenRegionHandler: Failed open of region=test5,8\x90\x00\x00\x00\x00\x00\x00/05_0,1326597390073.98a1e7255731aae44b3836641840113e. 
java.io.EOFException at java.io.DataInputStream.readByte(DataInputStream.java:250) at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299) at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320) at org.apache.hadoop.io.Text.readString(Text.java:400) at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1486) at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1437) at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1424) at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1419) at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.init(SequenceFileLogReader.java:57) at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158) at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:572) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1940) at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1896) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:366) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2661) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:619) {code} -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
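The leak above follows the classic read-loop shape: an EOFException thrown mid-read on a truncated file skips the close call. A minimal sketch of the fix pattern (plain java.io only; this is illustrative, not HBase's actual HRegion.replayRecoveredEdits code, and all names here are made up):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

class TruncatedLogRead {
    // Replays "edits" (here just bytes) until EOF. The key point is the
    // finally block: the reader is closed whether we hit a clean EOF, a
    // truncated-file EOFException, or any other IOException, so the
    // underlying file handle can never leak.
    static int replayEdits(InputStream in) throws IOException {
        DataInputStream reader = new DataInputStream(in);
        int applied = 0;
        try {
            while (true) {
                reader.readByte();   // throws EOFException on a truncated tail
                applied++;
            }
        } catch (EOFException eof) {
            // Truncated tail: stop replaying, keep the edits already applied.
        } finally {
            reader.close();          // runs on every exit path
        }
        return applied;
    }

    public static void main(String[] args) throws IOException {
        int n = replayEdits(new ByteArrayInputStream(new byte[] {1, 2, 3}));
        System.out.println(n); // prints 3
    }
}
```

Without the finally block, the EOFException would propagate past the close call, which is exactly the handle leak the patch addresses.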
[jira] [Commented] (HBASE-5271) Result.getValue and Result.getColumnLatest return the wrong column.
[ https://issues.apache.org/jira/browse/HBASE-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13200638#comment-13200638 ] Hudson commented on HBASE-5271: --- Integrated in HBase-0.92-security #90 (See [https://builds.apache.org/job/HBase-0.92-security/90/]) HBASE-5271 Result.getValue and Result.getColumnLatest return the wrong column (Ghais Issa) tedyu : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/KeyValue.java * /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java Result.getValue and Result.getColumnLatest return the wrong column. --- Key: HBASE-5271 URL: https://issues.apache.org/jira/browse/HBASE-5271 Project: HBase Issue Type: Bug Components: client Affects Versions: 0.90.5 Reporter: Ghais Issa Assignee: Ghais Issa Fix For: 0.94.0, 0.90.7, 0.92.1 Attachments: 5271-90.txt, 5271-v2.txt, fixKeyValueMatchingColumn.diff, testGetValue.diff In the following example result.getValue returns the wrong column: KeyValue kv = new KeyValue(Bytes.toBytes(r), Bytes.toBytes(24), Bytes.toBytes(2), Bytes.toBytes(7L)); Result result = new Result(new KeyValue[] { kv }); System.out.println(Bytes.toLong(result.getValue(Bytes.toBytes(2), Bytes.toBytes(2)))); // prints 7 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5307) Unable to gracefully decommission a node because of script error
[ https://issues.apache.org/jira/browse/HBASE-5307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13200637#comment-13200637 ] Hudson commented on HBASE-5307: --- Integrated in HBase-0.92-security #90 (See [https://builds.apache.org/job/HBase-0.92-security/90/]) HBASE-5307 Unable to gracefully decommission a node because of script error stack : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/bin/region_mover.rb Unable to gracefully decommission a node because of script error Key: HBASE-5307 URL: https://issues.apache.org/jira/browse/HBASE-5307 Project: HBase Issue Type: Bug Components: scripts Affects Versions: 0.92.0 Reporter: YiFeng Jiang Fix For: 0.92.1 Attachments: region_mover_HBASE-5307-0.92.patch Unable to gracefully decommission a node because a NameError occurred in region_mover.rb {code} $ bin/graceful_stop.sh ip-10-160-226-84.us-west-1.compute.internal ... NameError: no constructor for arguments (org.jruby.RubyString) on Java::OrgApacheHadoopHbase::HServerAddress available overloads: (org.apache.hadoop.hbase.HServerAddress) (java.net.InetSocketAddress) getRegions at /usr/local/hbase/current/bin/region_mover.rb:254 unloadRegions at /usr/local/hbase/current/bin/region_mover.rb:314 (root) at /usr/local/hbase/current/bin/region_mover.rb:430 Unloaded ip-10-160-226-84.us-west-1.compute.internal region(s) ip-10-160-226-84.us-west-1.compute.internal: stopping regionserver.. {code} The reason is that region_mover.rb calls the wrong HBase APIs when trying to establish a connection to the region server. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5267) Add a configuration to disable the slab cache by default
[ https://issues.apache.org/jira/browse/HBASE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203011#comment-13203011 ] Hudson commented on HBASE-5267: --- Integrated in HBase-0.92 #272 (See [https://builds.apache.org/job/HBase-0.92/272/]) HBASE-5267 Add a configuration to disable the slab cache by default (Li Pi) tedyu : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/conf/hbase-env.sh * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java * /hbase/branches/0.92/src/main/resources/hbase-default.xml Add a configuration to disable the slab cache by default Key: HBASE-5267 URL: https://issues.apache.org/jira/browse/HBASE-5267 Project: HBase Issue Type: Bug Affects Versions: 0.92.0 Reporter: Jean-Daniel Cryans Assignee: Li Pi Priority: Blocker Fix For: 0.94.0, 0.92.1 Attachments: 5267.txt, 5267v2.txt, 5267v3.txt, 5267v4.txt From what I commented at the tail of HBASE-4027: {quote} I changed the release note, the patch doesn't have a hbase.offheapcachesize configuration and it's enabled as soon as you set -XX:MaxDirectMemorySize (which is actually a big problem when you consider this: http://hbase.apache.org/book.html#trouble.client.oome.directmemory.leak). {quote} We need to add hbase.offheapcachesize and set it to false by default. Marking as a blocker for 0.92.1 and assigning to Li Pi at Todd's request. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5267) Add a configuration to disable the slab cache by default
[ https://issues.apache.org/jira/browse/HBASE-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203294#comment-13203294 ] Hudson commented on HBASE-5267: --- Integrated in HBase-TRUNK-security #101 (See [https://builds.apache.org/job/HBase-TRUNK-security/101/]) HBASE-5267 Add a configuration to disable the slab cache by default (Li Pi) tedyu : Files : * /hbase/trunk/conf/hbase-env.sh * /hbase/trunk/src/docbkx/upgrading.xml * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java * /hbase/trunk/src/main/resources/hbase-default.xml Add a configuration to disable the slab cache by default Key: HBASE-5267 URL: https://issues.apache.org/jira/browse/HBASE-5267 Project: HBase Issue Type: Bug Affects Versions: 0.92.0 Reporter: Jean-Daniel Cryans Assignee: Li Pi Priority: Blocker Fix For: 0.94.0, 0.92.1 Attachments: 5267.txt, 5267v2.txt, 5267v3.txt, 5267v4.txt From what I commented at the tail of HBASE-4027: {quote} I changed the release note, the patch doesn't have a hbase.offheapcachesize configuration and it's enabled as soon as you set -XX:MaxDirectMemorySize (which is actually a big problem when you consider this: http://hbase.apache.org/book.html#trouble.client.oome.directmemory.leak). {quote} We need to add hbase.offheapcachesize and set it to false by default. Marking as a blocker for 0.92.1 and assigning to Li Pi at Todd's request. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5348) Constraint configuration loaded with bloat
[ https://issues.apache.org/jira/browse/HBASE-5348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203296#comment-13203296 ] Hudson commented on HBASE-5348: --- Integrated in HBase-TRUNK-security #101 (See [https://builds.apache.org/job/HBase-TRUNK-security/101/]) HBASE-5348 Constraint configuration loaded with bloat stack : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/constraint/Constraints.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/constraint/CheckConfigurationConstraint.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/constraint/TestConstraints.java Constraint configuration loaded with bloat -- Key: HBASE-5348 URL: https://issues.apache.org/jira/browse/HBASE-5348 Project: HBase Issue Type: Bug Components: coprocessors, regionserver Affects Versions: 0.94.0 Reporter: Jesse Yates Assignee: Jesse Yates Priority: Minor Fix For: 0.94.0 Attachments: java_HBASE-5348.patch, java_HBASE-5348.patch Constraints load a configuration, but not the 'correct' one: they instantiate the default configuration (via new Configuration()). It should just be new Configuration(false). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-3850) Log more details when a scanner lease expires
[ https://issues.apache.org/jira/browse/HBASE-3850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203295#comment-13203295 ] Hudson commented on HBASE-3850: --- Integrated in HBase-TRUNK-security #101 (See [https://builds.apache.org/job/HBase-TRUNK-security/101/]) HBASE-3850 Log more details when a scanner lease expires (Darren Haas) tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java Log more details when a scanner lease expires - Key: HBASE-3850 URL: https://issues.apache.org/jira/browse/HBASE-3850 Project: HBase Issue Type: Improvement Components: regionserver Reporter: Benoit Sigoure Assignee: Darren Haas Priority: Critical Fix For: 0.94.0 Attachments: 3850-v3.txt, HBASE-3850.trunk.v1.patch, HBASE-3850.trunk.v2.patch The message logged by the RegionServer when a Scanner lease expires isn't as useful as it could be. {{Scanner 4765412385779771089 lease expired}} - most clients don't log their scanner ID, so it's really hard to figure out what was going on. I think it would be useful to at least log the name of the region on which the Scanner was open, and it would be great to have the ip:port of the client that had that lease too. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5345) CheckAndPut doesn't work when value is empty byte[]
[ https://issues.apache.org/jira/browse/HBASE-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13203858#comment-13203858 ] Hudson commented on HBASE-5345: --- Integrated in HBase-0.92 #273 (See [https://builds.apache.org/job/HBase-0.92/273/]) HBASE-5345 CheckAndPut doesn't work when value is empty byte[] (Evert Arckens) tedyu : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/branches/0.92/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java CheckAndPut doesn't work when value is empty byte[] --- Key: HBASE-5345 URL: https://issues.apache.org/jira/browse/HBASE-5345 Project: HBase Issue Type: Bug Affects Versions: 0.92.0 Reporter: Evert Arckens Assignee: Evert Arckens Fix For: 0.94.0, 0.92.1 Attachments: 5345-v2.txt, 5345.txt, checkAndMutateEmpty-HBASE-5345.patch When a value contains an empty byte[] and then a checkAndPut is performed with an empty byte[] , the operation will fail. For example: Put put = new Put(row1); put.add(fam1, qf1, new byte[0]); table.put(put); put = new Put(row1); put.add(fam1, qf1, val1); table.checkAndPut(row1, fam1, qf1, new byte[0], put); --- false I think this is related to HBASE-3793 and HBASE-3468. Note that you will also get into this situation when first putting a null value ( put.add(fam1,qf1,null) ), as this value will then be regarded and returned as an empty byte[] upon a get. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
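The failing scenario above comes down to the compare step inside check-and-put. A rough sketch of the intended semantics (plain Java with made-up names, not the actual HRegion.checkAndMutate code), assuming, as the report notes, that HBase normalizes a null put value to an empty byte[]:

```java
import java.util.Arrays;

class CheckAndPutCompare {
    // Models only the "does the stored cell match the expected value"
    // decision. The reported bug is that a zero-length expected value
    // never matched a zero-length stored value; normalizing both sides
    // to empty byte[] and comparing content makes empty == empty succeed.
    static boolean valueMatches(byte[] stored, byte[] expected) {
        byte[] s = stored == null ? new byte[0] : stored;   // null put value is stored as empty
        byte[] e = expected == null ? new byte[0] : expected;
        return Arrays.equals(s, e);
    }

    public static void main(String[] args) {
        // Mirrors the example in the report: cell holds new byte[0],
        // checkAndPut supplies new byte[0] -> should now match.
        System.out.println(valueMatches(new byte[0], new byte[0])); // prints true
    }
}
```

With this normalization, both the explicit `new byte[0]` case and the `put.add(fam1, qf1, null)` case from the report behave consistently.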
[jira] [Commented] (HBASE-5221) bin/hbase script doesn't look for Hadoop jars in the right place in trunk layout
[ https://issues.apache.org/jira/browse/HBASE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204283#comment-13204283 ] Hudson commented on HBASE-5221: --- Integrated in HBase-TRUNK-security #102 (See [https://builds.apache.org/job/HBase-TRUNK-security/102/]) HBASE-5221 bin/hbase script doesn't look for Hadoop jars in the right place in trunk layout -- REVERTED HBASE-5221 bin/hbase script doesn't look for Hadoop jars in the right place in trunk layout stack : Files : * /hbase/trunk/bin/hbase stack : Files : * /hbase/trunk/bin/hbase bin/hbase script doesn't look for Hadoop jars in the right place in trunk layout Key: HBASE-5221 URL: https://issues.apache.org/jira/browse/HBASE-5221 Project: HBase Issue Type: Bug Affects Versions: 0.92.0 Reporter: Todd Lipcon Assignee: Jimmy Xiang Fix For: 0.94.0 Attachments: hbase-5221.txt Running against an 0.24.0-SNAPSHOT hadoop: ls: cannot access /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-common*.jar: No such file or directory ls: cannot access /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-hdfs*.jar: No such file or directory ls: cannot access /home/todd/ha-demo/hadoop-0.24.0-SNAPSHOT/hadoop-mapred*.jar: No such file or directory The jars are rooted deeper in the hierarchy. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5298) Add thrift metrics to thrift2
[ https://issues.apache.org/jira/browse/HBASE-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204284#comment-13204284 ] Hudson commented on HBASE-5298: --- Integrated in HBase-TRUNK-security #102 (See [https://builds.apache.org/job/HBase-TRUNK-security/102/]) HBASE-5298 Add thrift metrics to thrift2 tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/TBoundedThreadPoolServer.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/ThriftMetrics.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftHBaseServiceHandler.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/thrift/TestCallQueue.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandler.java Add thrift metrics to thrift2 - Key: HBASE-5298 URL: https://issues.apache.org/jira/browse/HBASE-5298 Project: HBase Issue Type: Improvement Components: metrics, thrift Reporter: Scott Chen Assignee: Scott Chen Fix For: 0.94.0 Attachments: 5298-v3.txt, HBASE-5298.D1629.1.patch, HBASE-5298.D1629.2.patch, HBASE-5298.D1629.3.patch, HBASE-5298.D1629.4.patch We have added thrift metrics collection in HBASE-5186. It will be good to have them in thrift2 as well. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5345) CheckAndPut doesn't work when value is empty byte[]
[ https://issues.apache.org/jira/browse/HBASE-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204286#comment-13204286 ] Hudson commented on HBASE-5345: --- Integrated in HBase-TRUNK-security #102 (See [https://builds.apache.org/job/HBase-TRUNK-security/102/]) HBASE-5345 CheckAndPut doesn't work when value is empty byte[] (Evert Arckens) tedyu : Files : * /hbase/trunk/CHANGES.txt * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java CheckAndPut doesn't work when value is empty byte[] --- Key: HBASE-5345 URL: https://issues.apache.org/jira/browse/HBASE-5345 Project: HBase Issue Type: Bug Affects Versions: 0.92.0 Reporter: Evert Arckens Assignee: Evert Arckens Fix For: 0.94.0, 0.92.1 Attachments: 5345-v2.txt, 5345.txt, checkAndMutateEmpty-HBASE-5345.patch When a value contains an empty byte[] and then a checkAndPut is performed with an empty byte[] , the operation will fail. For example: Put put = new Put(row1); put.add(fam1, qf1, new byte[0]); table.put(put); put = new Put(row1); put.add(fam1, qf1, val1); table.checkAndPut(row1, fam1, qf1, new byte[0], put); --- false I think this is related to HBASE-3793 and HBASE-3468. Note that you will also get into this situation when first putting a null value ( put.add(fam1,qf1,null) ), as this value will then be regarded and returned as an empty byte[] upon a get. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5229) Provide basic building blocks for multi-row local transactions.
[ https://issues.apache.org/jira/browse/HBASE-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204285#comment-13204285 ] Hudson commented on HBASE-5229: --- Integrated in HBase-TRUNK-security #102 (See [https://builds.apache.org/job/HBase-TRUNK-security/102/]) HBASE-5229 Provide basic building blocks for 'multi-row' local transactions. larsh : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/coprocessor/MultiRowMutationEndpoint.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/coprocessor/MultiRowMutationProtocol.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/regionserver/TestAtomicOperation.java Provide basic building blocks for multi-row local transactions. - Key: HBASE-5229 URL: https://issues.apache.org/jira/browse/HBASE-5229 Project: HBase Issue Type: New Feature Components: client, regionserver Reporter: Lars Hofhansl Assignee: Lars Hofhansl Fix For: 0.94.0 Attachments: 5229-endpoint.txt, 5229-final.txt, 5229-multiRow-v2.txt, 5229-multiRow.txt, 5229-seekto-v2.txt, 5229-seekto.txt, 5229.txt In the final iteration, this issue provides a generalized, public mutateRowsWithLocks method on HRegion, that can be used by coprocessors to implement atomic operations efficiently. Coprocessors are already region aware, which makes this a good pairing of APIs. This feature is by design not available to the client via the HTable API. It took a long time to arrive at this and I apologize for the public exposure of my (erratic in retrospect) thought processes. Was: HBase should provide basic building blocks for multi-row local transactions. Local means that we do this by co-locating the data. Global (cross region) transactions are not discussed here. 
After a bit of discussion two solutions have emerged: 1. Keep the row-key for determining grouping and location and allow efficient intra-row scanning. A client application would then model tables as HBase-rows. 2. Define a prefix-length in HTableDescriptor that defines a grouping of rows. Regions will then never be split inside a grouping prefix. #1 is true to the current storage paradigm of HBase. #2 is true to the current client side API. I will explore these two with sample patches here. Was: As discussed (at length) on the dev mailing list, with HBASE-3584 and HBASE-5203 committed, supporting atomic cross-row transactions within a region becomes simple. I am aware of the hesitation about the usefulness of this feature, but we have to start somewhere. Let's use this jira for discussion, I'll attach a patch (with tests) momentarily to make this concrete. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5288) Security source code dirs missing from 0.92.0 release tarballs.
[ https://issues.apache.org/jira/browse/HBASE-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204876#comment-13204876 ] Hudson commented on HBASE-5288: --- Integrated in HBase-0.92 #275 (See [https://builds.apache.org/job/HBase-0.92/275/]) HBASE-5288 Security source code dirs missing from 0.92.0 release tarballs jmhsieh : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/assembly/all.xml Security source code dirs missing from 0.92.0 release tarballs. --- Key: HBASE-5288 URL: https://issues.apache.org/jira/browse/HBASE-5288 Project: HBase Issue Type: Bug Components: build Affects Versions: 0.94.0, 0.92.0 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Priority: Blocker Fix For: 0.94.0, 0.92.1 Attachments: hbase-5288.patch The release tarballs have a compiled version of the hbase jars and the security tarball seems to have the compiled security bits. However, the source code and resources for security implementation are missing from the release tarballs in both distributions. They should be included in both. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5367) [book] small formatting changes to compaction description in Arch/Regions/Store
[ https://issues.apache.org/jira/browse/HBASE-5367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204903#comment-13204903 ] Hudson commented on HBASE-5367: --- Integrated in HBase-TRUNK-security #103 (See [https://builds.apache.org/job/HBase-TRUNK-security/103/]) hbase-5367 book.xml - this time, really fixing the default compaction.min.size hbase-5367. book.xml - minor formatting in Arch/Region/Store compaction description [book] small formatting changes to compaction description in Arch/Regions/Store --- Key: HBASE-5367 URL: https://issues.apache.org/jira/browse/HBASE-5367 Project: HBase Issue Type: Improvement Reporter: Doug Meil Assignee: Doug Meil Priority: Minor Attachments: book_hbase_5367.xml.patch, book_hbase_5367_2.xml.patch Fixing a few small-but-important things that came out of a post-commit comment in HBASE-5365 book.xml * corrected default region flush size (it's actually 64mb) * removed trailing 'F' in a ratio discussion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5365) [book] adding description of compaction file selection to refGuide in Arch/Regions/Store
[ https://issues.apache.org/jira/browse/HBASE-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204904#comment-13204904 ] Hudson commented on HBASE-5365: --- Integrated in HBase-TRUNK-security #103 (See [https://builds.apache.org/job/HBase-TRUNK-security/103/]) hbase-5365. book - Arch/Region/Store adding description of compaction file selection [book] adding description of compaction file selection to refGuide in Arch/Regions/Store Key: HBASE-5365 URL: https://issues.apache.org/jira/browse/HBASE-5365 Project: HBase Issue Type: Improvement Reporter: Doug Meil Assignee: Doug Meil Attachments: docbkx_hbase_5365.patch book.xml * adding description of compaction selection algorithm with examples (based on existing unit tests) * also added a few links to the compaction section from other places in the book that already mention compaction. configuration.xml * added link to compaction section from the entry that discusses configuring major compaction interval. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5330) TestCompactSelection - adding 2 test cases to testCompactionRatio
[ https://issues.apache.org/jira/browse/HBASE-5330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204902#comment-13204902 ] Hudson commented on HBASE-5330: --- Integrated in HBase-TRUNK-security #103 (See [https://builds.apache.org/job/HBase-TRUNK-security/103/]) hbase-5330. Update to TestCompactSelection unit test for selection SF assertions. TestCompactSelection - adding 2 test cases to testCompactionRatio - Key: HBASE-5330 URL: https://issues.apache.org/jira/browse/HBASE-5330 Project: HBase Issue Type: Improvement Reporter: Doug Meil Assignee: Doug Meil Priority: Minor Attachments: TestCompactSelection_hbase_5330.java.patch, TestCompactSelection_hbase_5330_v2.java.patch There were three existing assertions in TestCompactSelection testCompactionRatio that did max # of files assertions... {code} assertEquals(maxFiles, store.compactSelection(sfCreate(7,6,5,4,3,2,1)).getFilesToCompact().size()); {code} ... and for references ... {code} assertEquals(maxFiles, store.compactSelection(sfCreate(true, 7,6,5,4,3,2,1)).getFilesToCompact().size()); {code} ... but they didn't assert against which StoreFiles got selected. While the number of StoreFiles is the same, the files selected are actually different, and I thought that there should be explicit assertions showing that. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5288) Security source code dirs missing from 0.92.0 release tarballs.
[ https://issues.apache.org/jira/browse/HBASE-5288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13204905#comment-13204905 ] Hudson commented on HBASE-5288: --- Integrated in HBase-TRUNK-security #103 (See [https://builds.apache.org/job/HBase-TRUNK-security/103/]) HBASE-5288 Security source code dirs missing from 0.92.0 release tarballs jmhsieh : Files : * /hbase/trunk/src/assembly/all.xml Security source code dirs missing from 0.92.0 release tarballs. --- Key: HBASE-5288 URL: https://issues.apache.org/jira/browse/HBASE-5288 Project: HBase Issue Type: Bug Components: build Affects Versions: 0.94.0, 0.92.0 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh Priority: Blocker Fix For: 0.94.0, 0.92.1 Attachments: hbase-5288.patch The release tarballs have a compiled version of the hbase jars and the security tarball seems to have the compiled security bits. However, the source code and resources for security implementation are missing from the release tarballs in both distributions. They should be included in both. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5358) HBaseObjectWritable should be able to serialize/deserialize generic arrays
[ https://issues.apache.org/jira/browse/HBASE-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13205255#comment-13205255 ] Hudson commented on HBASE-5358: --- Integrated in HBase-TRUNK-security #104 (See [https://builds.apache.org/job/HBase-TRUNK-security/104/]) HBASE-5358 HBaseObjectWritable should be able to serialize/deserialize generic arrays tedyu : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java * /hbase/trunk/src/test/java/org/apache/hadoop/hbase/io/TestHbaseObjectWritable.java HBaseObjectWritable should be able to serialize/deserialize generic arrays -- Key: HBASE-5358 URL: https://issues.apache.org/jira/browse/HBASE-5358 Project: HBase Issue Type: Improvement Components: coprocessors, io Affects Versions: 0.94.0 Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.94.0 Attachments: HBASE-5358_v3.patch HBaseObjectWritable can encode Writable[]'s but, but cannot encode A[] where A extends Writable. This becomes an issue for example when adding a coprocessor method which takes A[] (see HBASE-5352). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
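The core difficulty described above (encoding A[] where A extends Writable, not just Writable[]) reduces to recording the concrete component type on the write side and rebuilding the array with that type on the read side. A minimal illustration of the reflection step such a codec needs (java.lang.reflect only; this is not the HbaseObjectWritable patch itself, and the helper name is made up):

```java
import java.lang.reflect.Array;

class GenericArrayDemo {
    // Allocates the array with the given runtime component type, so a
    // later cast to A[] succeeds. Building a plain Object[] (or
    // Writable[]) instead would lose the runtime type and throw
    // ClassCastException at the caller.
    @SuppressWarnings("unchecked")
    static <T> T[] newTypedArray(Class<T> componentType, int length) {
        return (T[]) Array.newInstance(componentType, length);
    }

    public static void main(String[] args) {
        String[] a = newTypedArray(String.class, 2);
        System.out.println(a.getClass().getSimpleName()); // prints String[]
    }
}
```

A deserializer using this trick reads the component class name first, resolves it, then fills the typed array element by element.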
[jira] [Commented] (HBASE-5330) TestCompactSelection - adding 2 test cases to testCompactionRatio
[ https://issues.apache.org/jira/browse/HBASE-5330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13205337#comment-13205337 ] Hudson commented on HBASE-5330: --- Integrated in HBase-TRUNK #2656 (See [https://builds.apache.org/job/HBase-TRUNK/2656/]) hbase-5330. Update to TestCompactSelection unit test for selection SF assertions. TestCompactSelection - adding 2 test cases to testCompactionRatio - Key: HBASE-5330 URL: https://issues.apache.org/jira/browse/HBASE-5330 Project: HBase Issue Type: Improvement Reporter: Doug Meil Assignee: Doug Meil Priority: Minor Attachments: TestCompactSelection_hbase_5330.java.patch, TestCompactSelection_hbase_5330_v2.java.patch There were three existing assertions in TestCompactSelection testCompactionRatio that did max # of files assertions... {code} assertEquals(maxFiles, store.compactSelection(sfCreate(7,6,5,4,3,2,1)).getFilesToCompact().size()); {code} ... and for references ... {code} assertEquals(maxFiles, store.compactSelection(sfCreate(true, 7,6,5,4,3,2,1)).getFilesToCompact().size()); {code} ... but they didn't assert against which StoreFiles got selected. While the number of StoreFiles is the same, the files selected are actually different, and I thought that there should be explicit assertions showing that. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira