[jira] [Created] (HBASE-5781) Zookeeper session got closed while trying to assign the region to RS using hbck -fix
Zookeeper session got closed while trying to assign the region to RS using hbck -fix

Key: HBASE-5781
URL: https://issues.apache.org/jira/browse/HBASE-5781
Project: HBase
Issue Type: Bug
Components: hbck
Reporter: Kristam Subba Swathi

After running hbck on the cluster, one region was found to be unassigned, so hbck -fix was used to fix it. The assignment did not happen, however, because the ZooKeeper session was closed. Please find the trace below for more details.

Trying to fix unassigned region...
12/04/03 11:02:57 INFO util.HBaseFsckRepair: Region still in transition, waiting for it to become assigned: {NAME => 'ufdr,002300,179123498.00871fbd7583512e12c4eb38e900be8d.', STARTKEY => '002300', ENDKEY => '002311', ENCODED => 00871fbd7583512e12c4eb38e900be8d,}
12/04/03 11:02:58 INFO client.HConnectionManager$HConnectionImplementation: Closed zookeeper sessionid=0x236738a263a
12/04/03 11:02:58 INFO zookeeper.ZooKeeper: Session: 0x236738a263a closed
ERROR: Region { meta => ufdr,010444,179123857.01594219211d0035b9586f98954462e1., hdfs => hdfs://10.18.40.25:9000/hbase/ufdr/01594219211d0035b9586f98954462e1, deployed => } not deployed on any region server.
Trying to fix unassigned region...
12/04/03 11:02:58 INFO zookeeper.ClientCnxn: EventThread shut down
12/04/03 11:02:58 WARN zookeeper.ZKUtil: hconnection-0x236738a263a Unable to set watcher on znode (/hbase)
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
	at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:150)
	at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:263)
	at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.checkIfBaseNodeAvailable(ZooKeeperNodeTracker.java:208)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.checkIfBaseNodeAvailable(HConnectionManager.java:695)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:626)
	at org.apache.hadoop.hbase.client.HBaseAdmin.getMaster(HBaseAdmin.java:211)
	at org.apache.hadoop.hbase.client.HBaseAdmin.assign(HBaseAdmin.java:1325)
	at org.apache.hadoop.hbase.util.HBaseFsckRepair.forceOfflineInZK(HBaseFsckRepair.java:109)
	at org.apache.hadoop.hbase.util.HBaseFsckRepair.fixUnassigned(HBaseFsckRepair.java:92)
	at org.apache.hadoop.hbase.util.HBaseFsck.tryAssignmentRepair(HBaseFsck.java:1235)
	at org.apache.hadoop.hbase.util.HBaseFsck.checkRegionConsistency(HBaseFsck.java:1351)
	at org.apache.hadoop.hbase.util.HBaseFsck.checkAndFixConsistency(HBaseFsck.java:1114)
	at org.apache.hadoop.hbase.util.HBaseFsck.onlineConsistencyRepair(HBaseFsck.java:356)
	at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:375)
	at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:2894)
12/04/03 11:02:58 ERROR zookeeper.ZooKeeperWatcher: hconnection-0x236738a263a Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /hbase
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
	at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:150)
	at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:263)
	at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.checkIfBaseNodeAvailable(ZooKeeperNodeTracker.java:208)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.checkIfBaseNodeAvailable(HConnectionManager.java:695)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:626)
	at org.apache.hadoop.hbase.client.HBaseAdmin.getMaster(HBaseAdmin.java:211)
	at org.apache.hadoop.hbase.client.HBaseAdmin.assign(HBaseAdmin.java:1325)
	at org.apache.hadoop.hbase.util.HBaseFsckRepair.forceOfflineInZK(HBaseFsckRepair.java:109)
	at org.apache.hadoop.hbase.util.HBaseFsckRepair.fixUnassigned(HBaseFsckRepair.java:92)
	at org.apache.hadoop.hbase.util.HBaseFsck.tryAssignmentRepair(HBaseFsck.java:1235)
	at org.apache.hadoop.hbase.util.HBaseFsck.checkRegionConsistency(HBaseFsck.java:1351)
	at org.apache.hadoop.hbase.util.HBaseFsck.checkAndFixConsistency(HBaseFsck.java:1114)
	at org.apache.hadoop.hbase.util.HBaseFsck.onlineConsistencyRepair(HBaseFsck.java:356) at
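The failure mode in the trace above is hbck's long-running repair outliving its ZooKeeper session: once the session expires, every subsequent master call fails. One obvious mitigation is to detect the expiry, rebuild the connection, and retry the assignment call. The following is only an illustrative Python sketch of that retry pattern, not the actual HBase client API; `SessionExpiredError`, `make_conn`, and `op` are hypothetical stand-ins.

```python
class SessionExpiredError(Exception):
    """Hypothetical stand-in for ZooKeeper's session-expired exception."""

def with_reconnect(make_conn, op, retries=3):
    """Run op(conn); if the session has expired, rebuild the connection
    and retry instead of failing the whole repair run."""
    conn = make_conn()
    for attempt in range(retries):
        try:
            return op(conn)
        except SessionExpiredError:
            if attempt == retries - 1:
                raise
            conn = make_conn()  # the old session is unusable; open a fresh one
```

A repair loop wrapped this way survives a single expiry: the first attempt raises, the connection is rebuilt, and the second attempt completes.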
[jira] [Commented] (HBASE-4379) [hbck] Does not complain about tables with no end region [Z,]
[ https://issues.apache.org/jira/browse/HBASE-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253152#comment-13253152 ]

Jonathan Hsieh commented on HBASE-4379:
---------------------------------------

Yes, we should. Do you want to take a stab at it? (If so, just take the jira.) I need to knock off some other things on my plate before I can get back to this.

[hbck] Does not complain about tables with no end region [Z,]

Key: HBASE-4379
URL: https://issues.apache.org/jira/browse/HBASE-4379
Project: HBase
Issue Type: Bug
Components: hbck
Affects Versions: 0.90.5, 0.92.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Attachments: 0001-HBASE-4379-hbck-does-not-complain-about-tables-with-.patch, hbase-4379.v2.patch

hbck does not detect or raise an error when the last region of a table is missing (end key != '').

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
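The missing check is easy to state: after sorting a table's regions by start key, the chain must start at '', end at '', and each region's end key must equal the next region's start key. A minimal sketch of that invariant over plain (start_key, end_key) pairs (illustrative Python, not hbck's actual code):

```python
def find_chain_problems(regions):
    """regions: a table's (start_key, end_key) pairs sorted by start key;
    '' denotes the unbounded first start key / last end key."""
    if not regions:
        return ["table has no regions"]
    problems = []
    if regions[0][0] != '':
        problems.append("first region does not start at ''")
    for (_, end), (start, _) in zip(regions, regions[1:]):
        if end != start:
            problems.append("hole or overlap at %r / %r" % (end, start))
    if regions[-1][1] != '':
        # the case this jira is about: the table's last region is missing
        problems.append("last region's end key is not ''")
    return problems
```

A table ending at region [Z,] that was lost would show up as a non-empty final end key, which is exactly the condition hbck was not reporting.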
[jira] [Commented] (HBASE-5360) [uberhbck] Add options for how to handle offline split parents.
[ https://issues.apache.org/jira/browse/HBASE-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253154#comment-13253154 ]

Anoop Sam John commented on HBASE-5360:
---------------------------------------

@Jon Is HBASE-5719 handling this issue?

[uberhbck] Add options for how to handle offline split parents.

Key: HBASE-5360
URL: https://issues.apache.org/jira/browse/HBASE-5360
Project: HBase
Issue Type: Improvement
Components: hbck
Affects Versions: 0.90.7, 0.92.1, 0.94.0
Reporter: Jonathan Hsieh

In a recent case, we attempted to repair a cluster that suffered from HBASE-4238 and had about 6-7 generations of leftover split data. The hbck repair options in a development version of HBASE-5128 treat HDFS as ground truth but didn't check the SPLIT and OFFLINE flags found only in meta. The net effect was that it essentially attempted to merge many regions back into the eldest generation's parent's range. More safeguards to prevent mega-merges are being added in HBASE-5128.

This issue would automate the handling of the mega-merge, avoiding cases such as lingering grandparents. The strategy here would be to add more checks against .META., and to perform part of the catalog janitor's responsibilities for lingering grandparents. This would potentially include options to sideline regions, delete grandparent regions, set a minimum size for sidelining, and mechanisms for cleaning .META..

Note: There already exists a mechanism to reload these regions -- the bulk load mechanism in LoadIncrementalHFiles can be used to re-add grandparents (automatically splitting them if necessary) to HBase.
[jira] [Commented] (HBASE-5620) Convert the client protocol of HRegionInterface to PB
[ https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253169#comment-13253169 ]

Hadoop QA commented on HBASE-5620:
----------------------------------

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12522531/hbase-5620_v4.patch
against trunk revision .

+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 30 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
-1 findbugs. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests:
  org.apache.hadoop.hbase.io.hfile.TestForceCacheImportantBlocks
  org.apache.hadoop.hbase.replication.TestReplication
  org.apache.hadoop.hbase.mapreduce.TestWALPlayer
  org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit
  org.apache.hadoop.hbase.replication.TestMultiSlaveReplication
  org.apache.hadoop.hbase.regionserver.wal.TestHLog
  org.apache.hadoop.hbase.replication.TestMasterReplication

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/1508//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/1508//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/1508//console

This message is automatically generated.
Convert the client protocol of HRegionInterface to PB

Key: HBASE-5620
URL: https://issues.apache.org/jira/browse/HBASE-5620
Project: HBase
Issue Type: Sub-task
Components: ipc, master, migration, regionserver
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Fix For: 0.96.0
Attachments: hbase-5620_v3.patch, hbase-5620_v4.patch, hbase-5620_v4.patch
[jira] [Updated] (HBASE-5488) Fixed OfflineMetaRepair bug
[ https://issues.apache.org/jira/browse/HBASE-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Hsieh updated HBASE-5488:
----------------------------------
    Attachment: hbase-5488-v2.patch

Updated patch applies to trunk/0.94/0.92.

Fixed OfflineMetaRepair bug

Key: HBASE-5488
URL: https://issues.apache.org/jira/browse/HBASE-5488
Project: HBase
Issue Type: Bug
Affects Versions: 0.90.6
Reporter: gaojinchao
Assignee: gaojinchao
Priority: Minor
Fix For: 0.92.2
Attachments: HBASE-5488-branch92.patch, HBASE-5488-trunk.patch, HBASE-5488_branch90.txt, hbase-5488-v2.patch

I wanted to use the OfflineMetaRepair tool and found that nobody had fixed this bug. I will make a patch.

12/01/05 23:23:30 ERROR util.HBaseFsck: Bailed out due to:
java.lang.IllegalArgumentException: Wrong FS: hdfs://us01-ciqps1-name01.carrieriq.com:9000/hbase/M2M-INTEGRATION-MM_TION-1325190318714/0003d2ede27668737e192d8430dbe5d0/.regioninfo, expected: file:///
	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:352)
	at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:368)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
	at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:126)
	at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:284)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:398)
	at org.apache.hadoop.hbase.util.HBaseFsck.loadMetaEntry(HBaseFsck.java:256)
	at org.apache.hadoop.hbase.util.HBaseFsck.loadTableInfo(HBaseFsck.java:284)
	at org.apache.hadoop.hbase.util.HBaseFsck.rebuildMeta(HBaseFsck.java:402)
	at org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair.main(OfflineMetaRe
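The "Wrong FS" failure above comes from Hadoop's path check: the tool handed a fully-qualified hdfs:// path to the default (local) filesystem, whose scheme did not match. A rough Python model of that scheme check follows; it is illustrative only, and the real logic lives in org.apache.hadoop.fs.FileSystem.checkPath.

```python
from urllib.parse import urlparse

def check_path(path, default_fs="file:///"):
    """Model of Hadoop's FileSystem.checkPath: reject a path whose scheme
    differs from the filesystem it was handed to. Scheme-less paths inherit
    the default scheme and pass."""
    default_scheme = urlparse(default_fs).scheme
    scheme = urlparse(path).scheme or default_scheme
    if scheme != default_scheme:
        raise ValueError("Wrong FS: %s, expected: %s" % (path, default_fs))
    return path
```

The usual fix for this class of bug is to obtain the filesystem from the path itself (path.getFileSystem(conf) in Hadoop) rather than from the configured default, so hdfs:// paths are opened through HDFS regardless of the fs.default.name setting.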
[jira] [Updated] (HBASE-5488) OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property
[ https://issues.apache.org/jira/browse/HBASE-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Hsieh updated HBASE-5488:
----------------------------------
    Summary: OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property  (was: Fixed OfflineMetaRepair bug)

Changed jira title to be more descriptive.
[jira] [Updated] (HBASE-5488) OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property
[ https://issues.apache.org/jira/browse/HBASE-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Hsieh updated HBASE-5488:
----------------------------------
    Resolution: Fixed
    Fix Version/s: 0.96.0, 0.94.0, 0.90.7
    Hadoop Flags: Reviewed
    Status: Resolved  (was: Patch Available)

@Anoop -- Thanks for checking up on these (I missed this when it came in).

@gaojinchao -- Thanks for the useful patch! I've tested the failures and they work for me locally. Committed to 0.90/0.92/0.94/0.96-trunk.
[jira] [Assigned] (HBASE-5781) Zookeeper session got closed while trying to assign the region to RS using hbck -fix
[ https://issues.apache.org/jira/browse/HBASE-5781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Hsieh reassigned HBASE-5781:
-------------------------------------
    Assignee: Jonathan Hsieh
[jira] [Commented] (HBASE-5747) Forward port hbase-5708 [89-fb] Make MiniMapRedCluster directory a subdirectory of target/test
[ https://issues.apache.org/jira/browse/HBASE-5747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253183#comment-13253183 ]

Hadoop QA commented on HBASE-5747:
----------------------------------

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12522532/5708v4.txt
against trunk revision .

+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 64 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac compiler warnings.
-1 findbugs. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings.
+1 release audit. The applied patch does not increase the total number of release audit warnings.
-1 core tests. The patch failed these unit tests:
  org.apache.hadoop.hbase.replication.TestReplication
  org.apache.hadoop.hbase.mapreduce.TestWALPlayer
  org.apache.hadoop.hbase.replication.TestMultiSlaveReplication
  org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit
  org.apache.hadoop.hbase.replication.TestMasterReplication
  org.apache.hadoop.hbase.regionserver.wal.TestHLog

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/1509//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/1509//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/1509//console

This message is automatically generated.
Forward port hbase-5708 [89-fb] Make MiniMapRedCluster directory a subdirectory of target/test

Key: HBASE-5747
URL: https://issues.apache.org/jira/browse/HBASE-5747
Project: HBase
Issue Type: Task
Reporter: stack
Assignee: stack
Priority: Blocker
Attachments: 5474.txt, 5474v2.txt, 5474v3 (1).txt, 5474v3.txt, 5708v4.txt

Forward port as much as we can of Mikhail's hard-won test cleanups over on the 0.89 branch. This will improve our ability to run unit tests in parallel. He also found a few bugs.
[jira] [Commented] (HBASE-5781) Zookeeper session got closed while trying to assign the region to RS using hbck -fix
[ https://issues.apache.org/jira/browse/HBASE-5781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253184#comment-13253184 ]

Jonathan Hsieh commented on HBASE-5781:
---------------------------------------

@Kristam What versions are you using? (Can you fill out the affects version?) I actually ran into this problem earlier today and have been spending some time investigating.
[jira] [Commented] (HBASE-5360) [uberhbck] Add options for how to handle offline split parents.
[ https://issues.apache.org/jira/browse/HBASE-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253187#comment-13253187 ]

Jonathan Hsieh commented on HBASE-5360:
---------------------------------------

@Anoop. Partially. HBASE-5719 sidelines potentially overlapping regions but doesn't take .META.'s information about whether the region is offline into account. It could sideline a live region when it would have been more efficient to sideline an offline region. However, HBASE-5719 was sufficient for the case we were dealing with.

Let's make this a placeholder for ways to improve it. Some ideas include:
* Taking a region's offline/splitparent state into account if a .META. entry is present.
* Making decisions about sidelining vs merging based on region size and max region size (instead of range) xml properties.
* Improving the heuristic used to decide which regions are sidelined.
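The sidelining heuristic discussed above can be sketched as a ranking: when hbck must pick one region to sideline from an overlapping group, prefer regions that .META. marks OFFLINE or SPLIT (lingering parents), and break ties by taking the smallest. This is a hypothetical illustration; the field names and the ranking are assumptions, not hbck's implementation.

```python
def pick_sideline_candidate(overlapping):
    """overlapping: dicts with 'name', 'offline', 'split', and 'size' keys.
    Rank lingering parents (offline or split in meta) first, smallest first,
    so live regions are kept in place whenever possible."""
    return min(
        overlapping,
        key=lambda r: (not (r['offline'] or r['split']), r['size']),
    )
```

Under this ordering a large but offline split parent is chosen for sidelining ahead of a small live region, which addresses the inefficiency described in the comment.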
[jira] [Commented] (HBASE-5488) OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property
[ https://issues.apache.org/jira/browse/HBASE-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253189#comment-13253189 ] ramkrishna.s.vasudevan commented on HBASE-5488: --- @Jon Thanks for committing it. Anoop reminded me about this. OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property Key: HBASE-5488 URL: https://issues.apache.org/jira/browse/HBASE-5488 Project: HBase Issue Type: Bug Affects Versions: 0.90.6 Reporter: gaojinchao Assignee: gaojinchao Priority: Minor Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0 Attachments: HBASE-5488-branch92.patch, HBASE-5488-trunk.patch, HBASE-5488_branch90.txt, hbase-5488-v2.patch I want to use OfflineMetaRepair tools and found onbody fix this bugs. I will make a patch. 12/01/05 23:23:30 ERROR util.HBaseFsck: Bailed out due to: java.lang.IllegalArgumentException: Wrong FS: hdfs:// us01-ciqps1-name01.carrieriq.com:9000/hbase/M2M-INTEGRATION-MM_TION-13 25190318714/0003d2ede27668737e192d8430dbe5d0/.regioninfo, expected: file:/// at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:352) at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47) at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:368) at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251) at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:126) at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:284) at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:398) at org.apache.hadoop.hbase.util.HBaseFsck.loadMetaEntry(HBaseFsck.java:256) at org.apache.hadoop.hbase.util.HBaseFsck.loadTableInfo(HBaseFsck.java:284) at org.apache.hadoop.hbase.util.HBaseFsck.rebuildMeta(HBaseFsck.java:402) at org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair.main(OfflineMetaRe -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
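The "Wrong FS ... expected: file:///" failure above occurs when the client-side Hadoop configuration leaves the default filesystem at its local-file default, so HBaseFsck resolves the hdfs:// .regioninfo paths against file://. A minimal sketch of the client-side workaround, pending the fix in this issue, is to set fs.default.name explicitly to match hbase.rootdir (the namenode host and port below are placeholders, not from this report):

```xml
<!-- core-site.xml (or hbase-site.xml) on the machine running OfflineMetaRepair.
     hdfs://namenode.example.org:9000 is a hypothetical value; use the same
     filesystem authority that appears in your hbase.rootdir. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.example.org:9000</value>
</property>
```

With this in place, FileSystem.checkPath sees an HDFS path against an HDFS default filesystem and the IllegalArgumentException no longer fires.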
[jira] [Commented] (HBASE-5488) OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property
[ https://issues.apache.org/jira/browse/HBASE-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253195#comment-13253195 ] Hadoop QA commented on HBASE-5488: -- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12522535/hbase-5488-v2.patch against trunk revision . +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. -1 findbugs. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests: org.apache.hadoop.hbase.replication.TestReplication org.apache.hadoop.hbase.mapreduce.TestWALPlayer org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit org.apache.hadoop.hbase.replication.TestMultiSlaveReplication org.apache.hadoop.hbase.regionserver.wal.TestHLog org.apache.hadoop.hbase.replication.TestMasterReplication Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/1510//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/1510//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/1510//console This message is automatically generated. 
OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property Key: HBASE-5488 URL: https://issues.apache.org/jira/browse/HBASE-5488 Project: HBase Issue Type: Bug Affects Versions: 0.90.6 Reporter: gaojinchao Assignee: gaojinchao Priority: Minor Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0
[jira] [Commented] (HBASE-5677) The master never does balance because duplicate openhandled the one region
[ https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253196#comment-13253196 ] xufeng commented on HBASE-5677: --- @Lars Sorry, there is something I cannot understand. I think this issue can be fixed by HBASE-5454. Why do we need the 5677-proposal.txt patch for it?

The master never does balance because duplicate openhandled the one region -- Key: HBASE-5677 URL: https://issues.apache.org/jira/browse/HBASE-5677 Project: HBase Issue Type: Bug Affects Versions: 0.90.6 Environment: 0.90 Reporter: xufeng Assignee: xufeng Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0 Attachments: 5677-proposal.txt, 5677-proposal.txt, 5677-proposal.txt, HBASE-5677-90-v1.patch, surefire-report_no_patched_v1.html, surefire-report_patched_v1.html

If a region is assigned while the master is doing initialization (before processFailover runs), the region will be open-handled twice, because its unassigned node in ZooKeeper is handled again in AssignmentManager#processFailover(). This leaves the region in RIT, so the master never does balance.
[jira] [Commented] (HBASE-5677) The master never does balance because duplicate openhandled the one region
[ https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253198#comment-13253198 ] xufeng commented on HBASE-5677: --- Should we integrate HBASE-5454 into the 0.90 branch? I applied the HBASE-5454 patch to 0.90 in my cluster, and it works.

The master never does balance because duplicate openhandled the one region -- Key: HBASE-5677 URL: https://issues.apache.org/jira/browse/HBASE-5677
[jira] [Commented] (HBASE-5488) OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property
[ https://issues.apache.org/jira/browse/HBASE-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253200#comment-13253200 ] Hudson commented on HBASE-5488: --- Integrated in HBase-TRUNK #2751 (See [https://builds.apache.org/job/HBase-TRUNK/2751/]) HBASE-5488 OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property (gaojinchao) (Revision 1325625) Result = SUCCESS jmhsieh : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/util/hbck/OfflineMetaRepair.java

OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property Key: HBASE-5488 URL: https://issues.apache.org/jira/browse/HBASE-5488
[jira] [Commented] (HBASE-5488) OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property
[ https://issues.apache.org/jira/browse/HBASE-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253207#comment-13253207 ] Hudson commented on HBASE-5488: --- Integrated in HBase-0.94 #112 (See [https://builds.apache.org/job/HBase-0.94/112/]) HBASE-5488 OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property (gaojinchao) (Revision 1325626) Result = FAILURE jmhsieh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/hbck/OfflineMetaRepair.java

OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property Key: HBASE-5488 URL: https://issues.apache.org/jira/browse/HBASE-5488
[jira] [Commented] (HBASE-5488) OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property
[ https://issues.apache.org/jira/browse/HBASE-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253219#comment-13253219 ] Jonathan Hsieh commented on HBASE-5488: --- This patch shows test failures that seem unrelated, likely due to HBASE-5778.

OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property Key: HBASE-5488 URL: https://issues.apache.org/jira/browse/HBASE-5488
[jira] [Commented] (HBASE-5454) Refuse operations from Admin before master is initialized
[ https://issues.apache.org/jira/browse/HBASE-5454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253245#comment-13253245 ] xufeng commented on HBASE-5454: --- @chunhui Does it need to be added in HMaster#createTable?

Refuse operations from Admin before master is initialized - Key: HBASE-5454 URL: https://issues.apache.org/jira/browse/HBASE-5454 Project: HBase Issue Type: Improvement Reporter: chunhui shen Assignee: chunhui shen Fix For: 0.94.0 Attachments: hbase-5454.patch, hbase-5454v2.patch

In our testing environment, while the master was initializing, we found conflicts between master#assignAllUserRegions and the EnableTable event, causing region assignment to throw an exception so that the master aborted itself. We think we should refuse Admin operations such as CreateTable and EnableTable until the master is initialized; it would reduce errors.
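The guard being discussed (refusing admin operations until master startup completes) can be sketched as a simple initialized-flag check in front of each admin entry point. This is an illustrative, self-contained sketch only; the class and method names are hypothetical stand-ins, not the actual HMaster API:

```java
// Hypothetical sketch of the "refuse admin ops before initialization" pattern.
// Not HBase code: MasterGuardSketch stands in for HMaster.
public class MasterGuardSketch {
    private volatile boolean initialized = false;

    public void finishInitialization() { initialized = true; }

    private void checkInitialized() {
        // Reject admin operations so they cannot race with the master's
        // own startup-time region assignment.
        if (!initialized) {
            throw new IllegalStateException("Master is still initializing");
        }
    }

    public void createTable(String name) {
        checkInitialized();
        // ... actual table creation would follow here ...
    }

    public static void main(String[] args) {
        MasterGuardSketch master = new MasterGuardSketch();
        try {
            master.createTable("t1");
        } catch (IllegalStateException e) {
            System.out.println("refused: " + e.getMessage());
        }
        master.finishInitialization();
        master.createTable("t1");
        System.out.println("accepted");
    }
}
```

xufeng's question amounts to asking whether a check like checkInitialized() above is also placed at the start of HMaster#createTable.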
[jira] [Commented] (HBASE-5488) OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property
[ https://issues.apache.org/jira/browse/HBASE-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253246#comment-13253246 ] Hudson commented on HBASE-5488: --- Integrated in HBase-0.92 #370 (See [https://builds.apache.org/job/HBase-0.92/370/]) HBASE-5488 OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property (gaojinchao) (Revision 1325627) Result = FAILURE jmhsieh : Files : * /hbase/branches/0.92/CHANGES.txt * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/util/hbck/OfflineMetaRepair.java

OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property Key: HBASE-5488 URL: https://issues.apache.org/jira/browse/HBASE-5488
[jira] [Updated] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services
[ https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jieshan Bean updated HBASE-1936: Attachment: HBASE-1936-trunk.txt This patch is just for review. I have checked the coprocessor code; it cannot load classes dynamically at runtime, though I can refer to its code for how to load jars from HDFS. Please correct me if I'm wrong. In the current approach, this classloader will be used as the default one (please share your comments if you think differently). I'm still running more tests for this patch. (I will upload the introductory document next Monday. Thank you.)

ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services --- Key: HBASE-1936 URL: https://issues.apache.org/jira/browse/HBASE-1936 Project: HBase Issue Type: New Feature Reporter: stack Assignee: Jieshan Bean Labels: noob Attachments: HBASE-1936-trunk.txt, cp_from_hdfs.patch
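The classloader technique under review here can be sketched with the standard URLClassLoader: delegate to the parent first, then search extra jar locations. A real HDFS-backed loader would additionally fetch the jar out of HDFS before handing its location to the loader; the sketch below is a minimal, self-contained illustration of the delegation pattern, with placeholder URLs:

```java
import java.net.URL;
import java.net.URLClassLoader;

// Minimal sketch of the dynamic-classpath pattern discussed in HBASE-1936.
// A production loader would first localize the jar from HDFS; here the
// jarUrls array is simply whatever extra locations the caller supplies.
public class DynamicJarLoaderSketch {
    public static Class<?> loadFrom(URL[] jarUrls, String className) throws Exception {
        // Parent-first delegation: core classes still come from the JVM,
        // only unknown classes are searched in the extra jar URLs.
        try (URLClassLoader loader =
                new URLClassLoader(jarUrls, DynamicJarLoaderSketch.class.getClassLoader())) {
            return loader.loadClass(className);
        }
    }

    public static void main(String[] args) throws Exception {
        // With no extra URLs, lookup still resolves through the parent;
        // adding filter jars later never requires a service restart.
        System.out.println(loadFrom(new URL[0], "java.lang.String").getName());
    }
}
```

The point of the feature is that region servers can resolve new filter or coprocessor classes through such a loader without being restarted.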
[jira] [Updated] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services
[ https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jieshan Bean updated HBASE-1936: Attachment: (was: HBASE-1936-trunk.txt) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services --- Key: HBASE-1936 URL: https://issues.apache.org/jira/browse/HBASE-1936 Project: HBase Issue Type: New Feature Reporter: stack Assignee: Jieshan Bean Labels: noob Attachments: cp_from_hdfs.patch
[jira] [Updated] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services
[ https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jieshan Bean updated HBASE-1936: Attachment: HBASE-1936-trunk(forReview).patch ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services --- Key: HBASE-1936 URL: https://issues.apache.org/jira/browse/HBASE-1936 Project: HBase Issue Type: New Feature Reporter: stack Assignee: Jieshan Bean Labels: noob Attachments: HBASE-1936-trunk(forReview).patch, cp_from_hdfs.patch
[jira] [Commented] (HBASE-5620) Convert the client protocol of HRegionInterface to PB
[ https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253377#comment-13253377 ] Jimmy Xiang commented on HBASE-5620: --- I will take a look at the test failures. Convert the client protocol of HRegionInterface to PB - Key: HBASE-5620 URL: https://issues.apache.org/jira/browse/HBASE-5620 Project: HBase Issue Type: Sub-task Components: ipc, master, migration, regionserver Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 0.96.0 Attachments: hbase-5620_v3.patch, hbase-5620_v4.patch, hbase-5620_v4.patch
[jira] [Created] (HBASE-5782) Not all the regions are getting assigned after the log splitting.
Not all the regions are getting assigned after the log splitting. - Key: HBASE-5782 URL: https://issues.apache.org/jira/browse/HBASE-5782 Project: HBase Issue Type: Bug Components: wal Affects Versions: 0.94.0 Reporter: Gopinathan A Priority: Critical

Create a table with 1000 splits; after the region assignment, kill the regionserver which hosts the META table. A few regions are missing after the log splitting and region assignment. The HBCK report shows multiple region holes were created. The same scenario was verified multiple times on 0.92.1 with no issues.
[jira] [Commented] (HBASE-5782) Not all the regions are getting assigned after the log splitting.
[ https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253418#comment-13253418 ] ramkrishna.s.vasudevan commented on HBASE-5782: --- @Lars I was checking this issue. I think it has to be fixed before the RC? What do you say? Not sure of the root cause still.

Not all the regions are getting assigned after the log splitting. - Key: HBASE-5782 URL: https://issues.apache.org/jira/browse/HBASE-5782
[jira] [Commented] (HBASE-5778) Turn on WAL compression by default
[ https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253454#comment-13253454 ] stack commented on HBASE-5778: --- I backed it out of 0.94 and trunk.

Turn on WAL compression by default -- Key: HBASE-5778 URL: https://issues.apache.org/jira/browse/HBASE-5778 Project: HBase Issue Type: Improvement Reporter: Jean-Daniel Cryans Assignee: Lars Hofhansl Priority: Blocker Fix For: 0.94.0, 0.96.0 Attachments: 5778-addendum.txt, 5778.addendum, HBASE-5778.patch

I ran some tests to verify whether WAL compression should be turned on by default. For a use case where it's not very useful (values two orders of magnitude bigger than the keys), the insert time wasn't different and the CPU usage was 15% higher (150% CPU usage vs 130% when not compressing the WAL). When values are smaller than the keys, I saw a 38% improvement in insert run time, and CPU usage was 33% higher (600% CPU usage vs 450%). I'm not sure WAL compression accounts for all the additional CPU usage; it might just be that we're able to insert faster and we spend more time in the MemStore per second (because our MemStores are bad when they contain tens of thousands of values). Those are two extremes, but they show that for the price of some CPU we can save a lot. My machines have 2 quads with HT, so I still had a lot of idle CPUs.
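Until compression is on by default, WAL compression is an opt-in site setting. If I recall the property name correctly (it was introduced with the WAL compression work, so treat the exact key as an assumption to verify against your HBase version's documentation), enabling it looks like:

```xml
<!-- hbase-site.xml: opt in to WAL compression on region servers.
     Property name assumed from the WAL-compression feature; verify
     against the hbase-default.xml shipped with your release. -->
<property>
  <name>hbase.regionserver.wal.enablecompression</name>
  <value>true</value>
</property>
```

The trade-off measured above applies: meaningful write-throughput gains when keys dominate the entry size, at the cost of extra CPU.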
[jira] [Updated] (HBASE-5747) Forward port hbase-5708 [89-fb] Make MiniMapRedCluster directory a subdirectory of target/test
[ https://issues.apache.org/jira/browse/HBASE-5747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5747: - Status: Open (was: Patch Available)

Forward port hbase-5708 [89-fb] Make MiniMapRedCluster directory a subdirectory of target/test Key: HBASE-5747 URL: https://issues.apache.org/jira/browse/HBASE-5747 Project: HBase Issue Type: Task Reporter: stack Assignee: stack Priority: Blocker Attachments: 5474.txt, 5474v2.txt, 5474v3 (1).txt, 5474v3.txt, 5708v4.txt, 5708v4.txt

Forward-port as much as we can of Mikhail's hard-won test cleanups from the 0.89 branch. This will improve our ability to run unit tests in parallel. He also found a few bugs.
[jira] [Updated] (HBASE-5747) Forward port hbase-5708 [89-fb] Make MiniMapRedCluster directory a subdirectory of target/test
[ https://issues.apache.org/jira/browse/HBASE-5747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5747: - Attachment: 5708v4.txt Retry after revert of wal compression patch

Forward port hbase-5708 [89-fb] Make MiniMapRedCluster directory a subdirectory of target/test Key: HBASE-5747 URL: https://issues.apache.org/jira/browse/HBASE-5747
[jira] [Updated] (HBASE-5620) Convert the client protocol of HRegionInterface to PB
[ https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5620: - Status: Open (was: Patch Available) Convert the client protocol of HRegionInterface to PB - Key: HBASE-5620 URL: https://issues.apache.org/jira/browse/HBASE-5620
[jira] [Updated] (HBASE-5620) Convert the client protocol of HRegionInterface to PB
[ https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5620: - Attachment: (was: hbase-5620_v4.patch) Convert the client protocol of HRegionInterface to PB - Key: HBASE-5620 URL: https://issues.apache.org/jira/browse/HBASE-5620
[jira] [Updated] (HBASE-5773) HtablePool constructor not reading config files in certain cases
[ https://issues.apache.org/jira/browse/HBASE-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5773: - Attachment: different-config-behaviour.90.patch Version I applied to 0.90 branch.

HtablePool constructor not reading config files in certain cases Key: HBASE-5773 URL: https://issues.apache.org/jira/browse/HBASE-5773 Project: HBase Issue Type: Bug Components: client Affects Versions: 0.90.6, 0.92.1, 0.94.1 Reporter: Ioan Eugen Stan Priority: Minor Fix For: 0.92.2, 0.94.0 Attachments: different-config-behaviour.90.patch, different-config-behaviour.patch

Creating an HTablePool can give two different behaviours depending on the constructor called.

Case 1 loads the configs from hbase-site:

public HTablePool() {
  this(HBaseConfiguration.create(), Integer.MAX_VALUE);
}

Calling this with a null value for Configuration:

public HTablePool(final Configuration config, final int maxSize) {
  this(config, maxSize, null, null);
}

will invoke:

public HTablePool(final Configuration config, final int maxSize, final HTableInterfaceFactory tableFactory, PoolType poolType) {
  // Make a new configuration instance so I can safely cleanup when
  // done with the pool.
  this.config = config == null ? new Configuration() : config;

which does not read the hbase-site config files the way HBaseConfiguration.create() does. I've traced this problem through all versions of HBase.
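The inconsistency in the report can be reduced to the null-fallback choice in the last constructor: `new Configuration()` skips the site files that `HBaseConfiguration.create()` reads. A self-contained illustration of the two fallbacks (the Config class below is a hypothetical stand-in, not the Hadoop Configuration API):

```java
// Illustrative sketch of the HTablePool null-config bug; Config is a
// stand-in that only records whether site files were loaded.
public class PoolConfigSketch {
    static class Config {
        final boolean siteFilesLoaded;
        Config(boolean siteFilesLoaded) { this.siteFilesLoaded = siteFilesLoaded; }
    }

    // Plays the role of HBaseConfiguration.create(): reads hbase-site.xml.
    static Config create() { return new Config(true); }

    // Reported behaviour: a null config falls back to a bare Configuration,
    // so site files are silently skipped.
    static Config currentDefault(Config c) { return c == null ? new Config(false) : c; }

    // Consistent behaviour: a null config falls back to the site-file-aware
    // factory, matching what the no-arg HTablePool constructor does.
    static Config fixedDefault(Config c) { return c == null ? create() : c; }

    public static void main(String[] args) {
        System.out.println(currentDefault(null).siteFilesLoaded); // false
        System.out.println(fixedDefault(null).siteFilesLoaded);   // true
    }
}
```

Routing the null fallback through the same factory as the no-arg constructor is what makes the two construction paths behave identically.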
[jira] [Updated] (HBASE-5620) Convert the client protocol of HRegionInterface to PB
[ https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5620: - Status: Patch Available (was: Open) Convert the client protocol of HRegionInterface to PB - Key: HBASE-5620 URL: https://issues.apache.org/jira/browse/HBASE-5620
[jira] [Updated] (HBASE-5747) Forward port hbase-5708 [89-fb] Make MiniMapRedCluster directory a subdirectory of target/test
[ https://issues.apache.org/jira/browse/HBASE-5747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5747: - Status: Patch Available (was: Open) Forward port hbase-5708 [89-fb] Make MiniMapRedCluster directory a subdirectory of target/test Key: HBASE-5747 URL: https://issues.apache.org/jira/browse/HBASE-5747 Project: HBase Issue Type: Task Reporter: stack Assignee: stack Priority: Blocker Attachments: 5474.txt, 5474v2.txt, 5474v3 (1).txt, 5474v3.txt, 5708v4.txt, 5708v4.txt Forward port as much as we can of Mikhail's hard-won test cleanups over on the 0.89 branch. This will improve our ability to run unit tests in parallel. He also found a few bugs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-5773) HtablePool constructor not reading config files in certain cases
[ https://issues.apache.org/jira/browse/HBASE-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5773: - Attachment: different-config-behaviour.90.patch hmmm... this version actually compiles. This is what I committed.
[jira] [Updated] (HBASE-5773) HtablePool constructor not reading config files in certain cases
[ https://issues.apache.org/jira/browse/HBASE-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5773: - Fix Version/s: 0.90.7 Committed to 0.90 branch too.
[jira] [Commented] (HBASE-4336) Convert source tree into maven modules
[ https://issues.apache.org/jira/browse/HBASE-4336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253502#comment-13253502 ] Jesse Yates commented on HBASE-4336: Working with Matt Corgan a bit on getting a version compiling across machines, I ran into a maven issue (see links below) where the project will build fine if doing a 'mvn package', but 'mvn compile' will break if you have two modules wherein one creates a test-jar and the other depends on that jar, even with a test scope. As maven stands now, we can only have one module that has all the test code (or at least all the code for that module), which is a huge pain as a large amount of code uses the minicluster. An alternative design around this issue would be to have an hbase-test module that has all the multi-module/mini-cluster tests, and then each of the modules just has its own unit tests (I proposed something similar a couple months ago, but not for this reason). The other alternative is to actually fix maven. One of the maven jiras is marked resolved, but doesn't work on maven 3.0.x. The other has been sitting in limbo for a couple years. I'll try to get the maven issue resolved, but that means we will need to require people to build with the tip of maven (when the change gets released). The TL;DR is that until MNG-2045 gets resolved for mvn 3.0.x, it probably makes sense to just go with Stack's suggestion of hbase (parent), hbase-core and hbase-assemble for the moment and work in parallel on resolving the maven jira. Thoughts?
Maven Jira issues: - jira.codehaus.org/browse/MNG-2045 - http://jira.codehaus.org/browse/MNG-3559 Convert source tree into maven modules -- Key: HBASE-4336 URL: https://issues.apache.org/jira/browse/HBASE-4336 Project: HBase Issue Type: Task Components: build Reporter: Gary Helmling Priority: Critical Fix For: 0.96.0 When we originally converted the build to maven we had a single core module defined, but later reverted this to a module-less build for the sake of simplicity. It now looks like it's time to re-address this, as we have an actual need for modules to: * provide a trimmed down client library that applications can make use of * more cleanly support building against different versions of Hadoop, in place of some of the reflection machinations currently required * incorporate the secure RPC engine that depends on some secure Hadoop classes I propose we start simply by refactoring into two initial modules: * core - common classes and utilities, and client-side code and interfaces * server - master and region server implementations and supporting code This would also lay the groundwork for incorporating the HBase security features that have been developed. Once the module structure is in place, security-related features could then be incorporated into a third module -- security -- after normal review and approval. The security module could then depend on secure Hadoop, without modifying the dependencies of the rest of the HBase code. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
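The module layout Jesse describes -- one module publishing its test code (e.g. the minicluster) as a test-jar, a second module depending on it at test scope -- looks roughly like the pom fragments below (module names hbase-core/hbase-server are hypothetical placeholders). 'mvn package' resolves this fine, but per MNG-2045 a plain 'mvn compile' fails because the test-jar has not been packaged yet.

```xml
<!-- hbase-core/pom.xml: also publish the test classes as a test-jar -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <executions>
        <execution>
          <goals><goal>test-jar</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

<!-- hbase-server/pom.xml: depend on that test-jar, test scope only -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-core</artifactId>
  <version>${project.version}</version>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>
```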
[jira] [Updated] (HBASE-5654) [findbugs] Address dodgy bugs
[ https://issues.apache.org/jira/browse/HBASE-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Jindal updated HBASE-5654: --- Fix Version/s: 0.96.0 Labels: patch (was: ) Affects Version/s: 0.96.0 Status: Patch Available (was: Open) Patch submitted for findbugs. Please review and provide comments and suggestions. [findbugs] Address dodgy bugs - Key: HBASE-5654 URL: https://issues.apache.org/jira/browse/HBASE-5654 Project: HBase Issue Type: Sub-task Components: scripts Affects Versions: 0.96.0 Reporter: Jonathan Hsieh Assignee: Ashutosh Jindal Labels: patch Fix For: 0.96.0 See https://builds.apache.org/job/PreCommit-HBASE-Build/1313//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html#Warnings_STYLE This may be broken down further. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5620) Convert the client protocol of HRegionInterface to PB
[ https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253513#comment-13253513 ] Hadoop QA commented on HBASE-5620: -- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12522579/hbase-5620_v4.patch against trunk revision . +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 30 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. -1 findbugs. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests: org.apache.hadoop.hbase.mapreduce.TestWALPlayer Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/1511//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/1511//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/1511//console This message is automatically generated. Convert the client protocol of HRegionInterface to PB - Key: HBASE-5620 URL: https://issues.apache.org/jira/browse/HBASE-5620 Project: HBase Issue Type: Sub-task Components: ipc, master, migration, regionserver Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 0.96.0 Attachments: hbase-5620_v3.patch, hbase-5620_v4.patch, hbase-5620_v4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5778) Turn on WAL compression by default
[ https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253522#comment-13253522 ] Hudson commented on HBASE-5778: --- Integrated in HBase-TRUNK #2752 (See [https://builds.apache.org/job/HBase-TRUNK/2752/]) HBASE-5778 Turn on WAL compression by default (Revision 1325801) Result = SUCCESS stack : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogWriter.java Turn on WAL compression by default -- Key: HBASE-5778 URL: https://issues.apache.org/jira/browse/HBASE-5778 Project: HBase Issue Type: Improvement Reporter: Jean-Daniel Cryans Assignee: Lars Hofhansl Priority: Blocker Fix For: 0.94.0, 0.96.0 Attachments: 5778-addendum.txt, 5778.addendum, HBASE-5778.patch I ran some tests to verify if WAL compression should be turned on by default. For a use case where it's not very useful (values two order of magnitude bigger than the keys), the insert time wasn't different and the CPU usage 15% higher (150% CPU usage VS 130% when not compressing the WAL). When values are smaller than the keys, I saw a 38% improvement for the insert run time and CPU usage was 33% higher (600% CPU usage VS 450%). I'm not sure WAL compression accounts for all the additional CPU usage, it might just be that we're able to insert faster and we spend more time in the MemStore per second (because our MemStores are bad when they contain tens of thousands of values). Those are two extremes, but it shows that for the price of some CPU we can save a lot. My machines have 2 quads with HT, so I still had a lot of idle CPUs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
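For reference, the default being flipped here is a single boolean in hbase-site.xml; in the 0.94-era codebase the property name is hbase.regionserver.wal.enablecompression (worth verifying against your release's hbase-default.xml):

```xml
<!-- hbase-site.xml: enable (or, after the revert, explicitly re-enable) WAL compression -->
<property>
  <name>hbase.regionserver.wal.enablecompression</name>
  <value>true</value>
</property>
```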
[jira] [Commented] (HBASE-5747) Forward port hbase-5708 [89-fb] Make MiniMapRedCluster directory a subdirectory of target/test
[ https://issues.apache.org/jira/browse/HBASE-5747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253523#comment-13253523 ] Hadoop QA commented on HBASE-5747: -- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12522578/5708v4.txt against trunk revision . +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 64 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. -1 findbugs. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests: org.apache.hadoop.hbase.replication.TestReplication org.apache.hadoop.hbase.mapreduce.TestWALPlayer Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/1512//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/1512//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/1512//console This message is automatically generated.
[jira] [Commented] (HBASE-4071) Data GC: Remove all versions > TTL EXCEPT the last written version
[ https://issues.apache.org/jira/browse/HBASE-4071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253526#comment-13253526 ] Hadoop QA commented on HBASE-4071: -- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12491647/MinVersions-v9.diff against trunk revision . +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 12 new or modified tests. -1 patch. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/1513//console This message is automatically generated. Data GC: Remove all versions > TTL EXCEPT the last written version -- Key: HBASE-4071 URL: https://issues.apache.org/jira/browse/HBASE-4071 Project: HBase Issue Type: New Feature Reporter: stack Assignee: Lars Hofhansl Fix For: 0.92.0 Attachments: MinVersions-v9.diff, MinVersions.diff We were chatting today about our backup cluster. What we want is to be able to restore the dataset from any point of time, but only within a limited timeframe -- say one week. Thereafter, if the versions are older than one week, rather than as we do with TTL where we let go of all versions older than TTL, instead let go of all versions EXCEPT the last one written. So, it's like versions==1 beyond the one-week TTL. We want to allow that if an error is caught within a week of its happening -- a user mistakenly removes a critical table -- then we'll be able to restore up to the moment just before catastrophe hit; otherwise, we keep one version only. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5778) Turn on WAL compression by default
[ https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253529#comment-13253529 ] Jean-Daniel Cryans commented on HBASE-5778: --- Sorry for all the trouble guys, I thought the feature was more tested than that :(
[jira] [Commented] (HBASE-5778) Turn on WAL compression by default
[ https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253528#comment-13253528 ] Hudson commented on HBASE-5778: --- Integrated in HBase-0.94 #113 (See [https://builds.apache.org/job/HBase-0.94/113/]) HBASE-5778 Turn on WAL compression by default: REVERT (Revision 1325803) Result = SUCCESS stack : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogWriter.java
[jira] [Commented] (HBASE-5620) Convert the client protocol of HRegionInterface to PB
[ https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253538#comment-13253538 ] Jimmy Xiang commented on HBASE-5620: TestWALPlayer passed for me. I didn't have the latest from trunk? Convert the client protocol of HRegionInterface to PB - Key: HBASE-5620 URL: https://issues.apache.org/jira/browse/HBASE-5620 Project: HBase Issue Type: Sub-task Components: ipc, master, migration, regionserver Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 0.96.0 Attachments: hbase-5620_v3.patch, hbase-5620_v4.patch, hbase-5620_v4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-5778) Turn on WAL compression by default
[ https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-5778: - Fix Version/s: (was: 0.94.0) 0.94.1
[jira] [Commented] (HBASE-5677) The master never does balance because duplicate openhandled the one region
[ https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253549#comment-13253549 ] Lars Hofhansl commented on HBASE-5677: -- Arghh... OK. So: * in 0.94+ this is fixed, correct? * you'd like to backport HBASE-5454 to 0.90 and 0.92, right? So let's close this one then? The master never does balance because duplicate openhandled the one region -- Key: HBASE-5677 URL: https://issues.apache.org/jira/browse/HBASE-5677 Project: HBase Issue Type: Bug Affects Versions: 0.90.6 Environment: 0.90 Reporter: xufeng Assignee: xufeng Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0 Attachments: 5677-proposal.txt, 5677-proposal.txt, 5677-proposal.txt, HBASE-5677-90-v1.patch, surefire-report_no_patched_v1.html, surefire-report_patched_v1.html If a region is assigned while the master is still initializing (before processFailover runs), the region will be open-handled twice, because the unassigned node in ZooKeeper is handled again in AssignmentManager#processFailover(). This leaves the region in RIT, so the master never does balance. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5654) [findbugs] Address dodgy bugs
[ https://issues.apache.org/jira/browse/HBASE-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253552#comment-13253552 ] Jonathan Hsieh commented on HBASE-5654: --- Hi Ashutosh, I don't see an attachment -- could you attach one that we could take a look at? Thanks!
[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services
[ https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253564#comment-13253564 ] Zhihong Yu commented on HBASE-1936: --- The patch is of decent size. Please upload to review board. Why does callIsATriggeringClass() use reflection to call isATriggeringClass() ? Can you add some tests ? Thanks ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services --- Key: HBASE-1936 URL: https://issues.apache.org/jira/browse/HBASE-1936 Project: HBase Issue Type: New Feature Reporter: stack Assignee: Jieshan Bean Labels: noob Attachments: HBASE-1936-trunk(forReview).patch, cp_from_hdfs.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5547) Don't delete HFiles when in backup mode
[ https://issues.apache.org/jira/browse/HBASE-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253567#comment-13253567 ] jirapos...@reviews.apache.org commented on HBASE-5547: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/4633/#review6914 --- src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java https://reviews.apache.org/r/4633/#comment15366 Can the checks in the while loop be reversed, so that we check the counter first and then call the allRegionsDoneBackup(). Will this help in removing the call to disableHFileBackup, or am i missing anything here. src/main/java/org/apache/hadoop/hbase/client/HFileArchiveManager.java https://reviews.apache.org/r/4633/#comment15363 yeah, instantiation of this call is done more than once in a single code path (call to HBaseAdmin-enableHFileBackup()). Good to have some closing behavior. src/main/java/org/apache/hadoop/hbase/client/HFileArchiveManager.java https://reviews.apache.org/r/4633/#comment15364 How costly is this op; the reason I ask is it will be useful only once; once the znode is there, this call has no value. (Still learning zk stuff, kindly ignore if you think so) src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java https://reviews.apache.org/r/4633/#comment15375 minor nit: is enabled? src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java https://reviews.apache.org/r/4633/#comment15376 you made the deleteRegion to rawDeleteRegion? lol, but that sounds a bit secondary :) Why not keep delete or sth deleteWithNoArchive? src/main/java/org/apache/hadoop/hbase/zookeeper/HFileArchiveTracker.java https://reviews.apache.org/r/4633/#comment15369 Yes, please add some doc to it. It is instantiated in HMaster, so can't be abstract? Tracker and Monitor doesn't seem to give a clearer picture, somehow, especially, when tracker implements the monitor? - Himanshu On 2012-04-07 19:51:11, Jesse Yates wrote: bq. bq. 
--- bq. This is an automatically generated e-mail. To reply, visit: bq. https://reviews.apache.org/r/4633/ bq. --- bq. bq. (Updated 2012-04-07 19:51:11) bq. bq. bq. Review request for hbase, Michael Stack and Lars Hofhansl. bq. bq. bq. Summary bq. --- bq. bq. Essentially, whenever an hfile would be deleted, it is instead moved to the archive directory. In this impl, the archive directory is on a per-table basis, but defaults to '.archive'. Removing hfiles occurs in three places - compaction, merge and catalog janitor. The former and the two latter are distinctly different code paths, but did pull out some similarities. The latter two end up calling the same method, so there should be a reasonable amount of overlap. bq. bq. Implementation wise: bq. Updated the HMasterInterface to pass the calls onto the zookeeper. bq. Added a zk listener to handle updates from the master to the RS to backup. bq. Added a utility for removing files and finding archive directories. bq. Added tests for the regionserver and catalogjanitor approaches. bq. Added creation of manager in regionserver. bq. bq. bq. This addresses bug HBASE-5547. bq. https://issues.apache.org/jira/browse/HBASE-5547 bq. bq. bq. Diffs bq. - bq. bq. src/main/java/org/apache/hadoop/hbase/HConstants.java a4b989e bq. src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java 16e4017 bq. src/main/java/org/apache/hadoop/hbase/client/HFileArchiveManager.java PRE-CREATION bq. src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java 79d5fdd bq. src/main/java/org/apache/hadoop/hbase/master/HMaster.java fb21bdd bq. src/main/java/org/apache/hadoop/hbase/regionserver/HFileArchiveMonitor.java PRE-CREATION bq. src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java c3df319 bq. src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java 8a61f7d bq.
src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java 6884d53 bq.src/main/java/org/apache/hadoop/hbase/regionserver/Store.java 509a467 bq.src/main/java/org/apache/hadoop/hbase/util/HFileArchiveUtil.java PRE-CREATION bq.src/main/java/org/apache/hadoop/hbase/zookeeper/HFileArchiveTracker.java PRE-CREATION bq. src/main/java/org/apache/hadoop/hbase/zookeeper/RegionServerHFileTracker.java PRE-CREATION bq.src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java 4fc105f bq.src/main/resources/hbase-default.xml 44ee689 bq.src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java
[jira] [Updated] (HBASE-5677) The master never does balance because duplicate openhandled the one region
[ https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-5677: - Fix Version/s: (was: 0.96.0) (was: 0.94.0) Removed 0.94 and 0.96 from Fix Version/s The master never does balance because duplicate openhandled the one region -- Key: HBASE-5677 URL: https://issues.apache.org/jira/browse/HBASE-5677 Project: HBase Issue Type: Bug Affects Versions: 0.90.6 Environment: 0.90 Reporter: xufeng Assignee: xufeng Fix For: 0.90.7, 0.92.2 Attachments: 5677-proposal.txt, 5677-proposal.txt, 5677-proposal.txt, HBASE-5677-90-v1.patch, surefire-report_no_patched_v1.html, surefire-report_patched_v1.html If a region is assigned while the master is still initializing (before processFailover runs), the region is opened twice, because its unassigned node in ZooKeeper is handled again in AssignmentManager#processFailover(). This leaves the region stuck in RIT, so the master never balances. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-5677) The master never does balance because duplicate openhandled the one region
[ https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhihong Yu updated HBASE-5677: -- Attachment: (was: 5677-proposal.txt)
[jira] [Commented] (HBASE-5778) Turn on WAL compression by default
[ https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253588#comment-13253588 ] Lars Hofhansl commented on HBASE-5778: -- I still don't understand why this is a problem with replication. J-D do you have any insights? Turn on WAL compression by default -- Key: HBASE-5778 URL: https://issues.apache.org/jira/browse/HBASE-5778 Project: HBase Issue Type: Improvement Reporter: Jean-Daniel Cryans Assignee: Lars Hofhansl Priority: Blocker Fix For: 0.96.0, 0.94.1 Attachments: 5778-addendum.txt, 5778.addendum, HBASE-5778.patch I ran some tests to verify whether WAL compression should be turned on by default. For a use case where it's not very useful (values two orders of magnitude bigger than the keys), the insert time wasn't different and the CPU usage was 15% higher (150% CPU usage VS 130% when not compressing the WAL). When values are smaller than the keys, I saw a 38% improvement in the insert run time and CPU usage was 33% higher (600% CPU usage VS 450%). I'm not sure WAL compression accounts for all the additional CPU usage; it might just be that we're able to insert faster and we spend more time in the MemStore per second (because our MemStores are bad when they contain tens of thousands of values). Those are two extremes, but it shows that for the price of some CPU we can save a lot. My machines have 2 quads with HT, so I still had a lot of idle CPUs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5677) The master never does balance because duplicate openhandled the one region
[ https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253593#comment-13253593 ] Hadoop QA commented on HBASE-5677: -- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12522508/5677-proposal.txt against trunk revision . +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. -1 findbugs. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests: org.apache.hadoop.hbase.mapreduce.TestWALPlayer Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/1514//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/1514//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/1514//console This message is automatically generated.
[jira] [Updated] (HBASE-5772) Unable to open the few links in http://hbase.apache.org/
[ https://issues.apache.org/jira/browse/HBASE-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5772: - Attachment: 5772.txt Site was broke last time I gen'd and published it. Breakage comes of my use of mvn3 rather than mvn2 generating the site and a few small changes that crept into pom over time. This patch pulls config for the site-plugin up into the plugins section where it will be noticed and removes mention of the plugin from reporting where config was ignored (mvn3). We were not picking up our custom template so we got $banner.name on RHS. Updated some of the plugins to go w/ mvn3; site, findbugs, javadoc. Javadoc report was taking too long; added exclusion of generated classes. That helps. Rat report is not made unless you do release. In fact all reports were broke -- no license, etc. -- fixed by upgrading site and reports plugin. Added tweak on the css to get rid of some white space. Unable to open the few links in http://hbase.apache.org/ Key: HBASE-5772 URL: https://issues.apache.org/jira/browse/HBASE-5772 Project: HBase Issue Type: Bug Components: documentation Affects Versions: 0.94.0 Reporter: Kiran BC Attachments: 5772.txt A few links on http://hbase.apache.org/ are not working. For example, Ref Guide (multi-page) actually links to http://hbase.apache.org/book/book.html, and opening that gives a "Page not found" error. If I add /book to the url, like http://hbase.apache.org/book/book/book.html, it takes me to the Apache HBase Reference Guide. I think the folder structure has been changed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HBASE-5772) Unable to open the few links in http://hbase.apache.org/
[ https://issues.apache.org/jira/browse/HBASE-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-5772. -- Resolution: Fixed Fix Version/s: 0.96.0 Assignee: stack I just committed this and published the generated site. Should show up in an hour or two.
[jira] [Updated] (HBASE-4608) HLog Compression
[ https://issues.apache.org/jira/browse/HBASE-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-4608: - Release Note: Adds a custom dictionary-based compression on WAL. Off by default. To enable, set hbase.regionserver.wal.enablecompression to true in hbase-site.xml. Note that replication is currently broken when WAL compression is enabled. was:Adds a custom dictionary-based compression on WAL. Off by default. To enable, set hbase.regionserver.wal.enablecompression to true in hbase-site.xml. HLog Compression Key: HBASE-4608 URL: https://issues.apache.org/jira/browse/HBASE-4608 Project: HBase Issue Type: New Feature Reporter: Li Pi Assignee: Li Pi Fix For: 0.94.0 Attachments: 4608-v19.txt, 4608-v20.txt, 4608-v22.txt, 4608v1.txt, 4608v13.txt, 4608v13.txt, 4608v14.txt, 4608v15.txt, 4608v16.txt, 4608v17.txt, 4608v18.txt, 4608v23.txt, 4608v24.txt, 4608v25.txt, 4608v27.txt, 4608v29.txt, 4608v30.txt, 4608v5.txt, 4608v6.txt, 4608v7.txt, 4608v8fixed.txt, hbase-4608-v28-delta.txt, hbase-4608-v28.txt, hbase-4608-v28.txt The current bottleneck to HBase write speed is replicating the WAL appends across different datanodes. We can speed up this process by compressing the HLog. Current plan involves using a dictionary to compress table name, region id, cf name, and possibly other bits of repeated data. Also, HLog format may be changed in other ways to produce a smaller HLog. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
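The release note above names the switch explicitly; in hbase-site.xml the setting would look like the following minimal fragment (the property name is taken from the release note, the surrounding markup is the standard hbase-site.xml property format):

```xml
<!-- Enable dictionary-based WAL compression (off by default).
     Per the release note, replication is currently broken when this is on. -->
<property>
  <name>hbase.regionserver.wal.enablecompression</name>
  <value>true</value>
</property>
```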
[jira] [Commented] (HBASE-5547) Don't delete HFiles when in backup mode
[ https://issues.apache.org/jira/browse/HBASE-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253606#comment-13253606 ] jirapos...@reviews.apache.org commented on HBASE-5547: -- --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/4633/#review6917 --- src/main/java/org/apache/hadoop/hbase/regionserver/Store.java https://reviews.apache.org/r/4633/#comment15382 You wouldn't have a chance to do that if it fails, as there will be an IOException? src/main/java/org/apache/hadoop/hbase/regionserver/Store.java https://reviews.apache.org/r/4633/#comment15385 I see; you're trying to archive files, and just delete in case there is any failure for some hfiles. src/main/java/org/apache/hadoop/hbase/zookeeper/RegionServerHFileTracker.java https://reviews.apache.org/r/4633/#comment15387 I missed this; who is calling this addTable? It would be great to have the usecase. (sorry for reviewing it piecemeal, it's a huge patch as per my standards :)) src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java https://reviews.apache.org/r/4633/#comment15389 And, what are the better alternatives (some subclass holding all these constants, or something more aesthetic). - Himanshu On 2012-04-07 19:51:11, Jesse Yates wrote: bq. bq. --- bq. This is an automatically generated e-mail. To reply, visit: bq. https://reviews.apache.org/r/4633/ bq. --- bq. bq. (Updated 2012-04-07 19:51:11) bq. bq. bq. Review request for hbase, Michael Stack and Lars Hofhansl. bq. bq. bq. Summary bq. --- bq. bq. Essentially, whenever an hfile would be deleted, it is instead moved to the archive directory. In this impl, the archive directory is on a per table basis, but defaults to '.archive'. Removing hfiles occurs in three places - compaction, merge and catalog janitor. The former and two latter are distinctly different code paths, but I did pull out some similarities.
The latter two end up calling the same method, so there should be a reasonable amount of overlap. bq. bq. Implementation wise: bq. Updated the HMasterInterface to pass the calls onto the zookeeper. bq. Added a zk listener to handle updates from the master to the RS to backup. bq. Added a utility for removing files and finding archive directories bq. Added tests for the regionserver and catalogjanitor approaches. bq. Added creation of manager in regionserver. bq. bq. bq. This addresses bug HBASE-5547. bq. https://issues.apache.org/jira/browse/HBASE-5547 bq. bq. bq. Diffs bq. - bq. bq.src/main/java/org/apache/hadoop/hbase/HConstants.java a4b989e bq.src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java 16e4017 bq.src/main/java/org/apache/hadoop/hbase/client/HFileArchiveManager.java PRE-CREATION bq.src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java 79d5fdd bq.src/main/java/org/apache/hadoop/hbase/master/HMaster.java fb21bdd bq. src/main/java/org/apache/hadoop/hbase/regionserver/HFileArchiveMonitor.java PRE-CREATION bq.src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java c3df319 bq.src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java 8a61f7d bq. src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java 6884d53 bq.src/main/java/org/apache/hadoop/hbase/regionserver/Store.java 509a467 bq.src/main/java/org/apache/hadoop/hbase/util/HFileArchiveUtil.java PRE-CREATION bq.src/main/java/org/apache/hadoop/hbase/zookeeper/HFileArchiveTracker.java PRE-CREATION bq. src/main/java/org/apache/hadoop/hbase/zookeeper/RegionServerHFileTracker.java PRE-CREATION bq.src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java 4fc105f bq.src/main/resources/hbase-default.xml 44ee689 bq.src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java 41616c8 bq.src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java b4dcb83 bq. 
src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionHFileArchiving.java PRE-CREATION bq.src/test/java/org/apache/hadoop/hbase/util/HFileArchiveTestingUtil.java PRE-CREATION bq.src/test/java/org/apache/hadoop/hbase/util/MockRegionServerServices.java 7d02759 bq. bq. Diff: https://reviews.apache.org/r/4633/diff bq. bq. bq. Testing bq. --- bq. bq. Added two tests for the separate cases - archiving via the regionserver and for the catalog tracker. Former runs in a mini cluster and also touches the changes to HMasterInterface and zookeeper. bq. bq. bq. Thanks, bq. bq.
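The core mechanic under review -- move an hfile into a per-table '.archive' directory instead of deleting it -- can be sketched with plain java.nio file operations. This is a hedged illustration only: the method name and path layout below are hypothetical and do not reflect the patch's actual code, which works against HDFS paths.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class HFileArchiveSketch {
    // Illustrative only: instead of deleting an hfile, move it into a
    // per-table archive directory (the review defaults this to ".archive").
    static Path archiveInsteadOfDelete(Path hfile, Path tableDir) throws Exception {
        Path archiveDir = tableDir.resolve(".archive");
        Files.createDirectories(archiveDir); // create archive dir on first use
        Path target = archiveDir.resolve(hfile.getFileName());
        return Files.move(hfile, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws Exception {
        Path tableDir = Files.createTempDirectory("table");
        Path hfile = Files.createFile(tableDir.resolve("region1-hfile1"));
        Path archived = archiveInsteadOfDelete(hfile, tableDir);
        // The original path is gone, the archived copy exists.
        System.out.println(Files.exists(hfile) + " " + Files.exists(archived));
    }
}
```

The same move-don't-delete idea is what lets a backup process pick the files up later at its leisure.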
[jira] [Commented] (HBASE-5604) M/R tool to replay WAL files
[ https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253607#comment-13253607 ] Zhihong Yu commented on HBASE-5604: --- Looking at test failure reported by Hadoop QA: https://builds.apache.org/job/PreCommit-HBASE-Build/1514//testReport/org.apache.hadoop.hbase.mapreduce/TestWALPlayer/testTimeFormat/ {code} java.lang.AssertionError: expected:1334092861001 but was:1334067661001 {code} I wonder if timezone could be an issue here - the difference is 7 hours. If you don't want to involve a call such as setTimeZone(TimeZone.getTimeZone("America/Los_Angeles")), please comment out: {code} assertEquals(1334092861001L, conf.getLong(HLogInputFormat.END_TIME_KEY, 0)); {code} M/R tool to replay WAL files Key: HBASE-5604 URL: https://issues.apache.org/jira/browse/HBASE-5604 Project: HBase Issue Type: New Feature Reporter: Lars Hofhansl Assignee: Lars Hofhansl Fix For: 0.94.0, 0.96.0 Attachments: 5604-v10.txt, 5604-v11.txt, 5604-v4.txt, 5604-v6.txt, 5604-v7.txt, 5604-v8.txt, 5604-v9.txt, HLog-5604-v3.txt Just an idea I had. Might be useful for restore of a backup using the HLogs. This could be an M/R job (with a mapper per HLog file). The tool would get a timerange and a (set of) table(s). We'd pick the right HLogs based on time before the M/R job is started and then have a mapper per HLog file. The mapper would then go through the HLog, filter out all WALEdits that don't fit the time range or don't belong to any of the tables, and then use HFileOutputFormat to generate HFiles. Would need to indicate the splits we want, probably from a live table. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
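The 7-hour gap Zhihong observes is exactly what parsing one wall-clock timestamp under UTC versus America/Los_Angeles (UTC-7 under daylight saving in April 2012) produces, and pinning the TimeZone on the formatter is what makes such a test deterministic. A minimal sketch -- the timestamp string and pattern below are hypothetical, not TestWALPlayer's actual format:

```java
import java.text.SimpleDateFormat;
import java.util.TimeZone;

public class TimeZonePinning {
    public static void main(String[] args) throws Exception {
        // Hypothetical wall-clock timestamp and pattern for illustration.
        String stamp = "2012/04/10 19:21:01.001";
        String pattern = "yyyy/MM/dd HH:mm:ss.SSS";

        SimpleDateFormat utc = new SimpleDateFormat(pattern);
        utc.setTimeZone(TimeZone.getTimeZone("GMT"));
        SimpleDateFormat pacific = new SimpleDateFormat(pattern);
        pacific.setTimeZone(TimeZone.getTimeZone("America/Los_Angeles"));

        // The same string parsed under two zones yields epoch millis that
        // differ by the 7-hour PDT offset -- the same gap as in the failure.
        long diff = pacific.parse(stamp).getTime() - utc.parse(stamp).getTime();
        System.out.println(diff); // 25200000 ms = 7 hours
    }
}
```

A test that asserts on absolute epoch millis should either pin the zone like this or avoid hard-coded expected values, as Zhihong suggests.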
[jira] [Commented] (HBASE-5778) Turn on WAL compression by default
[ https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253611#comment-13253611 ] Jean-Daniel Cryans commented on HBASE-5778: --- I haven't had a look, but I'd guess that if we're reading files that are being written then we don't have access to the dict.
[jira] [Commented] (HBASE-5778) Turn on WAL compression by default
[ https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253614#comment-13253614 ] Lars Hofhansl commented on HBASE-5778: -- Oh I see. The KVs are only decompressed when read.
[jira] [Commented] (HBASE-5778) Turn on WAL compression by default
[ https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253615#comment-13253615 ] Hudson commented on HBASE-5778: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5778 Turn on WAL compression by default: REVERT (Revision 1325803) HBASE-5778 Turn on WAL compression by default (Revision 1325567) Result = SUCCESS stack : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogWriter.java jdcryans : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogWriter.java
[jira] [Commented] (HBASE-5775) ZKUtil doesn't handle deleteRecurisively cleanly
[ https://issues.apache.org/jira/browse/HBASE-5775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253617#comment-13253617 ] Hudson commented on HBASE-5775: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5775 ZKUtil doesn't handle deleteRecurisively cleanly (Jesse Yates) (Revision 1325541) Result = SUCCESS larsh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java ZKUtil doesn't handle deleteRecurisively cleanly Key: HBASE-5775 URL: https://issues.apache.org/jira/browse/HBASE-5775 Project: HBase Issue Type: Improvement Affects Versions: 0.94.0 Reporter: Jesse Yates Assignee: Jesse Yates Fix For: 0.94.0, 0.96.0 Attachments: java_HBASE-5775.patch ZKUtil.deleteNodeRecursively()'s contract says that it handles deletion of the node and all its children. However, nothing is mentioned as to what happens if the node you are attempting to delete doesn't actually exist. Turns out, it throws a null pointer exception. I'm proposing that we change the code s.t. it handles the case where the parent is already gone and exits cleanly, rather than failing horribly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
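The guard Jesse proposes -- return cleanly when the node is already gone instead of hitting a NullPointerException -- is a general null-safe recursive delete pattern. The sketch below shows the same pattern against the local filesystem rather than ZooKeeper, since ZKUtil itself needs a live quorum; it is an analogy, not the patch's actual code.

```java
import java.io.File;

public class SafeRecursiveDelete {
    // Analogous guard to the HBASE-5775 fix: if the node (here, a path)
    // doesn't exist, exit cleanly instead of dereferencing a null listing.
    static void deleteRecursively(File node) {
        if (node == null || !node.exists()) {
            return; // already gone -- nothing to do
        }
        File[] children = node.listFiles(); // null for non-directories
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        node.delete();
    }

    public static void main(String[] args) {
        // Deleting a path that was never created must not throw.
        deleteRecursively(new File("/tmp/does-not-exist-xyz"));
        System.out.println("ok");
    }
}
```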
[jira] [Commented] (HBASE-5488) OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property
[ https://issues.apache.org/jira/browse/HBASE-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253616#comment-13253616 ] Hudson commented on HBASE-5488: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5488 OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property (gaojinchao) (Revision 1325626) Result = SUCCESS jmhsieh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/hbck/OfflineMetaRepair.java OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property Key: HBASE-5488 URL: https://issues.apache.org/jira/browse/HBASE-5488 Project: HBase Issue Type: Bug Affects Versions: 0.90.6 Reporter: gaojinchao Assignee: gaojinchao Priority: Minor Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0 Attachments: HBASE-5488-branch92.patch, HBASE-5488-trunk.patch, HBASE-5488_branch90.txt, hbase-5488-v2.patch I wanted to use the OfflineMetaRepair tool and found that nobody has fixed this bug. I will make a patch.
12/01/05 23:23:30 ERROR util.HBaseFsck: Bailed out due to: java.lang.IllegalArgumentException: Wrong FS: hdfs:// us01-ciqps1-name01.carrieriq.com:9000/hbase/M2M-INTEGRATION-MM_TION-13 25190318714/0003d2ede27668737e192d8430dbe5d0/.regioninfo, expected: file:/// at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:352) at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47) at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:368) at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251) at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:126) at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:284) at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:398) at org.apache.hadoop.hbase.util.HBaseFsck.loadMetaEntry(HBaseFsck.java:256) at org.apache.hadoop.hbase.util.HBaseFsck.loadTableInfo(HBaseFsck.java:284) at org.apache.hadoop.hbase.util.HBaseFsck.rebuildMeta(HBaseFsck.java:402) at org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair.main(OfflineMetaRe -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
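The "Wrong FS: hdfs://... expected: file:///" in the trace above means the tool fell back to the local filesystem because the hadoop-0.20-era default-filesystem key was not honored. For reference, that key looks like the following in core-site.xml (the host and port are illustrative; in later Hadoop versions the key was renamed fs.defaultFS):

```xml
<!-- hadoop 0.20.x default filesystem key; value below is illustrative -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.example.com:9000</value>
</property>
```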
[jira] [Commented] (HBASE-5773) HtablePool constructor not reading config files in certain cases
[ https://issues.apache.org/jira/browse/HBASE-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253618#comment-13253618 ] Hudson commented on HBASE-5773: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5773 HtablePool constructor not reading config files in certain cases (Revision 1325381) Result = SUCCESS stack : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HTablePool.java HtablePool constructor not reading config files in certain cases Key: HBASE-5773 URL: https://issues.apache.org/jira/browse/HBASE-5773 Project: HBase Issue Type: Bug Components: client Affects Versions: 0.90.6, 0.92.1, 0.94.1 Reporter: Ioan Eugen Stan Priority: Minor Fix For: 0.90.7, 0.92.2, 0.94.0 Attachments: different-config-behaviour.90.patch, different-config-behaviour.90.patch, different-config-behaviour.patch Creating an HTablePool can exhibit two different behaviours depending on the constructor called. Case 1: loads the configs from hbase-site public HTablePool() { this(HBaseConfiguration.create(), Integer.MAX_VALUE); } Calling this with null values for Configuration: public HTablePool(final Configuration config, final int maxSize) { this(config, maxSize, null, null); } ends up in: public HTablePool(final Configuration config, final int maxSize, final HTableInterfaceFactory tableFactory, PoolType poolType) { // Make a new configuration instance so I can safely cleanup when // done with the pool. this.config = config == null ? new Configuration() : config; which does not read the hbase-site config files as HBaseConfiguration.create() does. I've tracked this problem to all versions of hbase. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
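The bug boils down to which fallback a null config gets: `new Configuration()` (no site files) versus `HBaseConfiguration.create()` (site files loaded). The self-contained sketch below models that difference with mock stand-ins -- the `Configuration` class, `create()` factory, and the property value are all hypothetical, purely to show why the two fallbacks diverge:

```java
import java.util.HashMap;
import java.util.Map;

public class HTablePoolConfigSketch {
    // Mock stand-in: a bare Configuration knows no site-file values.
    static class Configuration {
        Map<String, String> props = new HashMap<>();
        String get(String key) { return props.get(key); }
    }

    // Mock of HBaseConfiguration.create(): pretends to load hbase-site.xml.
    static Configuration create() {
        Configuration c = new Configuration();
        c.props.put("hbase.zookeeper.quorum", "host1"); // pretend site value
        return c;
    }

    // Buggy fallback: a null config silently loses all site-file settings.
    static Configuration buggyDefault(Configuration config) {
        return config == null ? new Configuration() : config;
    }

    // Fixed fallback: fall back to the factory that reads the site files.
    static Configuration fixedDefault(Configuration config) {
        return config == null ? create() : config;
    }

    public static void main(String[] args) {
        System.out.println(buggyDefault(null).get("hbase.zookeeper.quorum"));
        System.out.println(fixedDefault(null).get("hbase.zookeeper.quorum"));
    }
}
```

The first line prints `null` (the quorum setting was lost), the second prints the site value, which is the behavior difference the reporter tracked across HBase versions.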
[jira] [Commented] (HBASE-5618) SplitLogManager - prevent unnecessary attempts to resubmits
[ https://issues.apache.org/jira/browse/HBASE-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253619#comment-13253619 ] Hudson commented on HBASE-5618: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5618 SplitLogManager - prevent unnecessary attempts to resubmits (Revision 1310922) Result = SUCCESS stack : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java SplitLogManager - prevent unnecessary attempts to resubmits --- Key: HBASE-5618 URL: https://issues.apache.org/jira/browse/HBASE-5618 Project: HBase Issue Type: Improvement Components: wal, zookeeper Reporter: Prakash Khemani Assignee: Prakash Khemani Fix For: 0.92.2, 0.94.0 Attachments: 0001-HBASE-5618-SplitLogManager-prevent-unnecessary-attem.patch, 0001-HBASE-5618-SplitLogManager-prevent-unnecessary-attem.patch, 0001-HBASE-5618-SplitLogManager-prevent-unnecessary-attem.patch, 0001-HBASE-5618-SplitLogManager-prevent-unnecessary-attem.patch, 0001-HBASE-5618-SplitLogManager-prevent-unnecessary-attem.patch Currently, once a watch fires that the task node has been updated (heartbeated) by the worker, the splitlogmanager still waits quite some time before it updates the last-heard-from time. This is because the manager currently schedules another getDataSetWatch() and only after that finishes will it update the task's last-heard-from time. This leads to a large number of zk-BadVersion warnings when resubmission is continuously attempted and it fails. Two changes should be made (1) On a resubmission failure because of BadVersion the task's lastUpdate time should get upped. (2) The task's lastUpdate time should get upped as soon as the nodeDataChanged() watch fires and without waiting for getDataSetWatch() to complete. -- This message is automatically generated by JIRA.
[jira] [Commented] (HBASE-5758) Forward port HBASE-4109 Hostname returned via reverse dns lookup contains trailing period if configured interface is not 'default'
[ https://issues.apache.org/jira/browse/HBASE-5758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253621#comment-13253621 ] Hudson commented on HBASE-5758: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5758 Forward port HBASE-4109 Hostname returned via reverse dns lookup contains trailing period if configured interface is not default (Revision 1311822) Result = SUCCESS stack : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/Strings.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java Forward port HBASE-4109 Hostname returned via reverse dns lookup contains trailing period if configured interface is not 'default' Key: HBASE-5758 URL: https://issues.apache.org/jira/browse/HBASE-5758 Project: HBase Issue Type: Task Reporter: stack Assignee: stack Fix For: 0.92.2, 0.94.0 Attachments: 5758.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5604) M/R tool to replay WAL files
[ https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253620#comment-13253620 ] Hudson commented on HBASE-5604: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5604 addendum, mark TestWALPlayer as LargeTest (Revision 1325608) HBASE-5604 M/R tool to replay WAL files (Revision 1325562) Result = SUCCESS larsh : Files : * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALPlayer.java larsh : Files : * /hbase/branches/0.94/src/docbkx/ops_mgt.xml * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapreduce/HLogInputFormat.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHLogRecordReader.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALPlayer.java M/R tool to replay WAL files Key: HBASE-5604 URL: https://issues.apache.org/jira/browse/HBASE-5604 Project: HBase Issue Type: New Feature Reporter: Lars Hofhansl Assignee: Lars Hofhansl Fix For: 0.94.0, 0.96.0 Attachments: 5604-v10.txt, 5604-v11.txt, 5604-v4.txt, 5604-v6.txt, 5604-v7.txt, 5604-v8.txt, 5604-v9.txt, HLog-5604-v3.txt Just an idea I had. Might be useful for restoring a backup using the HLogs. This could be an M/R job (with a mapper per HLog file). The tool would get a time range and a (set of) table(s). We'd pick the right HLogs based on time before the M/R job is started and then have a mapper per HLog file. The mapper would then go through the HLog, filter out all WALEdits that don't fit into the time range or are not in any of the tables, and then use HFileOutputFormat to generate HFiles. We would need to indicate the splits we want, probably from a live table. -- This message is automatically generated by JIRA.
[jira] [Commented] (HBASE-5689) Skipping RecoveredEdits may cause data loss
[ https://issues.apache.org/jira/browse/HBASE-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253623#comment-13253623 ] Hudson commented on HBASE-5689: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5689 Skipping RecoveredEdits may cause data loss (Chunhui) (Revision 1310787) Result = SUCCESS tedyu : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java

Skipping RecoveredEdits may cause data loss --- Key: HBASE-5689 URL: https://issues.apache.org/jira/browse/HBASE-5689 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.94.0 Reporter: chunhui shen Assignee: chunhui shen Priority: Critical Fix For: 0.94.0 Attachments: 5689-testcase.patch, 5689-v4.txt, HBASE-5689.patch, HBASE-5689.patch, HBASE-5689v2.patch, HBASE-5689v3.patch, HBASE-5689v3.patch

Let's see the following scenario:
1. Region is on server A.
2. Put KV(r1-v1) to the region.
3. Move the region from server A to server B.
4. Put KV(r2-v2) to the region.
5. Move the region from server B to server A.
6. Put KV(r3-v3) to the region.
7. kill -9 server B and start it.
8. kill -9 server A and start it.
9. Scan the region: we get only two KVs (r1-v1, r2-v2); the third KV (r3-v3) is lost.

Let's analyse the above scenario from the code:
1. The edit logs of KV(r1-v1) and KV(r3-v3) are both recorded in the same hlog file on server A.
2. When we split server B's hlog file in the process of ServerShutdownHandler, we create one RecoveredEdits file f1 for the region.
3. When we split server A's hlog file in the process of ServerShutdownHandler, we create another RecoveredEdits file f2 for the region.
4. However, RecoveredEdits file f2 will be skipped when initializing the region in HRegion#replayRecoveredEditsIfAny:

{code}
for (Path edits: files) {
  if (edits == null || !this.fs.exists(edits)) {
    LOG.warn("Null or non-existent edits file: " + edits);
    continue;
  }
  if (isZeroLengthThenDelete(this.fs, edits)) continue;
  if (checkSafeToSkip) {
    Path higher = files.higher(edits);
    long maxSeqId = Long.MAX_VALUE;
    if (higher != null) {
      // Edit file name pattern, HLog.EDITFILES_NAME_PATTERN: "-?[0-9]+"
      String fileName = higher.getName();
      maxSeqId = Math.abs(Long.parseLong(fileName));
    }
    if (maxSeqId <= minSeqId) {
      String msg = "Maximum possible sequenceid for this log is " + maxSeqId
          + ", skipped the whole file, path=" + edits;
      LOG.debug(msg);
      continue;
    } else {
      checkSafeToSkip = false;
    }
  }
}
{code}
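The skip decision in that loop can be modeled in isolation: each recovered-edits file is named by a sequence id, and a file may only be skipped when the next file's name proves every edit in it is at or below the region's minSeqId. A self-contained sketch under that assumption (plain longs stand in for the real file paths matching HLog.EDITFILES_NAME_PATTERN):

```java
import java.util.NavigableSet;
import java.util.TreeSet;
import java.util.Arrays;

class RecoveredEditsSkip {
    // Decide whether an edits file can be skipped: safe only when a
    // higher-named file exists and its name (the max possible seqid of
    // this file) is <= minSeqId, i.e. everything here is already flushed.
    static boolean safeToSkip(NavigableSet<Long> files, long edits, long minSeqId) {
        Long higher = files.higher(edits);
        long maxSeqId = higher != null ? Math.abs(higher) : Long.MAX_VALUE;
        return maxSeqId <= minSeqId;
    }
}
```

For files named 100, 200, 300 with minSeqId 250, only the file named 100 is safely skippable; the last file has no upper bound and must always be replayed.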
[jira] [Commented] (HBASE-5736) ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly
[ https://issues.apache.org/jira/browse/HBASE-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253624#comment-13253624 ] Hudson commented on HBASE-5736: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5736 ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly (Scott Chen) (Revision 1325067) Result = SUCCESS tedyu : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServerCmdLine.java ThriftServerRunner.HbaseHandler.mutateRow() does not use ByteBuffer correctly - Key: HBASE-5736 URL: https://issues.apache.org/jira/browse/HBASE-5736 Project: HBase Issue Type: Bug Reporter: Scott Chen Assignee: Scott Chen Fix For: 0.94.0, 0.96.0 Attachments: 5736-94.txt, HBASE-5736.D2649.1.patch, HBASE-5736.D2649.2.patch, HBASE-5736.D2649.3.patch We fixed a similar bug in https://issues.apache.org/jira/browse/HBASE-5507. The code uses ByteBuffer.array() to read the ByteBuffer. This ignores the offset and returns the whole underlying byte array. The bug can be triggered by using framed-transport Thrift servers.
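The pitfall is easy to reproduce with plain java.nio: array() returns the entire backing array and ignores both arrayOffset() and the buffer's position/limit, which is exactly the kind of sliced buffer a framed transport hands out. A minimal sketch of the wrong and right extraction:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

class ByteBufferExtract {
    // WRONG: ignores arrayOffset()/position() and returns the whole
    // backing array, including bytes outside this buffer's window.
    static byte[] wrong(ByteBuffer bb) {
        return bb.array();
    }

    // RIGHT: copy exactly the remaining bytes the buffer covers.
    static byte[] right(ByteBuffer bb) {
        byte[] out = new byte[bb.remaining()];
        bb.duplicate().get(out); // duplicate() leaves the caller's position untouched
        return out;
    }
}
```

Wrapping a 5-byte frame at offset 2 makes the difference visible: wrong() hands back all 5 bytes, right() only the 3 the buffer actually represents.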
[jira] [Commented] (HBASE-5735) Clearer warning message when connecting a non-secure HBase client to a secure HBase server
[ https://issues.apache.org/jira/browse/HBASE-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253625#comment-13253625 ] Hudson commented on HBASE-5735: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5735 Clearer warning message when connecting a non-secure HBase client to a secure HBase server (Revision 1310915) Result = SUCCESS stack : Files : * /hbase/branches/0.94/security/src/main/java/org/apache/hadoop/hbase/ipc/SecureServer.java Clearer warning message when connecting a non-secure HBase client to a secure HBase server -- Key: HBASE-5735 URL: https://issues.apache.org/jira/browse/HBASE-5735 Project: HBase Issue Type: Improvement Components: security Affects Versions: 0.92.1, 0.94.0 Reporter: Shaneal Manek Assignee: Shaneal Manek Priority: Trivial Fix For: 0.92.2, 0.94.0 Attachments: HBASE-5375-v2.patch, HBASE-5375.patch, HBASE-5735-v3.patch When a connection from a non secure-rpc-engine client is attempted the warning message you get is related to version mismatch: Mar 28, 3:27:13 PM WARN org.apache.hadoop.ipc.SecureServer Incorrect header or version mismatch from 172.29.82.121:43849 got version 3 expected version 4 While this is true, it isn't as useful as it could be. A more specific error message warning end users that they're connecting with a non-secure client may be more useful. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4109) Hostname returned via reverse dns lookup contains trailing period if configured interface is not default
[ https://issues.apache.org/jira/browse/HBASE-4109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253627#comment-13253627 ] Hudson commented on HBASE-4109: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5758 Forward port HBASE-4109 Hostname returned via reverse dns lookup contains trailing period if configured interface is not default (Revision 1311822) Result = SUCCESS stack : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/Strings.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java Hostname returned via reverse dns lookup contains trailing period if configured interface is not default -- Key: HBASE-4109 URL: https://issues.apache.org/jira/browse/HBASE-4109 Project: HBase Issue Type: Bug Components: master, regionserver Affects Versions: 0.90.3 Reporter: Shrijeet Paliwal Assignee: Shrijeet Paliwal Fix For: 0.90.4 Attachments: 0001-HBASE-4109-Sanitize-hostname-returned-from-DNS-class.patch If you are using an interface anything other than 'default' (literally that keyword) DNS.java 's getDefaultHost will return a string which will have a trailing period at the end. It seems javadoc of reverseDns in DNS.java (see below) is conflicting with what that function is actually doing. It is returning a PTR record while claims it returns a hostname. The PTR record always has period at the end , RFC: http://irbs.net/bog-4.9.5/bog47.html We make call to DNS.getDefaultHost at more than one places and treat that as actual hostname. 
Quoting HRegionServer for example:

{code}
String machineName = DNS.getDefaultHost(
    conf.get("hbase.regionserver.dns.interface", "default"),
    conf.get("hbase.regionserver.dns.nameserver", "default"));
{code}

This causes inconsistencies. An example of such an inconsistency was observed while debugging the issue "Regions not getting reassigned if RS is brought down". More here: http://search-hadoop.com/m/CANUA1qRCkQ1 We may want to sanitize the string returned from the DNS class. Or, better, we can take the path of overhauling the way we do DNS name matching all over.
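The sanitization amounts to stripping the PTR record's trailing period before treating the result as a hostname. A simplified re-implementation of that idea (a stand-in sketch, not the committed Strings.java code):

```java
class HostnameUtil {
    // Strip the trailing period a reverse-DNS PTR record carries,
    // e.g. "host.example.com." -> "host.example.com".
    static String domainNamePointerToHostName(String dnPtr) {
        if (dnPtr == null) return null;
        return dnPtr.endsWith(".") ? dnPtr.substring(0, dnPtr.length() - 1) : dnPtr;
    }
}
```

Applying this at every DNS.getDefaultHost call site keeps hostname comparisons consistent regardless of which interface is configured.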
[jira] [Commented] (HBASE-5656) LoadIncrementalHFiles createTable should detect and set compression algorithm
[ https://issues.apache.org/jira/browse/HBASE-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253626#comment-13253626 ] Hudson commented on HBASE-5656: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5656 LoadIncrementalHFiles createTable should detect and set compression algorithm (Cosmin Lehene) (Revision 1311104) Result = SUCCESS larsh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java LoadIncrementalHFiles createTable should detect and set compression algorithm - Key: HBASE-5656 URL: https://issues.apache.org/jira/browse/HBASE-5656 Project: HBase Issue Type: Bug Components: util Affects Versions: 0.92.1 Reporter: Cosmin Lehene Assignee: Cosmin Lehene Fix For: 0.92.2, 0.94.0, 0.96.0 Attachments: 5656-simple.txt, HBASE-5656-0.92.patch, HBASE-5656-0.92.patch, HBASE-5656-0.92.patch, HBASE-5656-0.92.patch Original Estimate: 1h Remaining Estimate: 1h LoadIncrementalHFiles doesn't set compression when creating the table. The compression algorithm can be detected from the files within each family dir.
[jira] [Commented] (HBASE-5770) Add a clock skew warning threshold
[ https://issues.apache.org/jira/browse/HBASE-5770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253628#comment-13253628 ] Hudson commented on HBASE-5770: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5770 Add a clock skew warning threshold (Revision 1325388) Result = SUCCESS stack : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java Add a clock skew warning threshold -- Key: HBASE-5770 URL: https://issues.apache.org/jira/browse/HBASE-5770 Project: HBase Issue Type: Improvement Components: master, regionserver Reporter: Ian Varley Assignee: Ian Varley Priority: Minor Fix For: 0.94.0 Attachments: HBASE_5770_v1.patch Original Estimate: 1h Remaining Estimate: 1h There's currently an exception thrown by the master when a region server attempts to start up with clock skew greater than some configured amount (defaulting to 30 seconds). However, it'd be nice to get some warnings logged at a value that isn't severe enough to warrant killing the RS, but still represents significant skew that could affect correctness. Will attach a simple patch to add this as a setting. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
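The two-level check the patch describes can be sketched as: reject a region server whose skew exceeds the pre-existing maximum (30 seconds by default, per the text), and merely warn above a lower, configurable threshold. A stand-in model of that decision (threshold values in the usage are illustrative, not the shipped defaults):

```java
class ClockSkewCheck {
    enum Result { OK, WARN, REJECT }

    // maxSkewMs: skew at which the master refuses the region server
    //            (30s default per the issue text).
    // warnSkewMs: lower threshold that only produces a log warning
    //             (the new setting this issue adds; value illustrative).
    static Result check(long masterNowMs, long serverNowMs,
                        long warnSkewMs, long maxSkewMs) {
        long skew = Math.abs(masterNowMs - serverNowMs);
        if (skew > maxSkewMs) return Result.REJECT; // would throw an exception today
        if (skew > warnSkewMs) return Result.WARN;  // new behavior: warn, keep the RS
        return Result.OK;
    }
}
```

With warn at 10s and max at 30s, a 15s skew logs a warning while a 40s skew still rejects the server.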
[jira] [Commented] (HBASE-3443) ICV optimization to look in memstore first and then store files (HBASE-3082) does not work when deletes are in the mix
[ https://issues.apache.org/jira/browse/HBASE-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253629#comment-13253629 ] Hudson commented on HBASE-3443: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-3443 ICV optimization to look in memstore first and then store files (HBASE-3082) does not work when deletes are in the mix (Revision 1325453) Result = SUCCESS larsh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java ICV optimization to look in memstore first and then store files (HBASE-3082) does not work when deletes are in the mix -- Key: HBASE-3443 URL: https://issues.apache.org/jira/browse/HBASE-3443 Project: HBase Issue Type: Bug Components: regionserver Affects Versions: 0.90.0, 0.90.1, 0.90.2, 0.90.3, 0.90.4, 0.90.5, 0.90.6, 0.92.0, 0.92.1 Reporter: Kannan Muthukkaruppan Assignee: Lars Hofhansl Priority: Critical Labels: corruption Fix For: 0.94.0, 0.96.0 Attachments: 3443.txt For incrementColumnValue() HBASE-3082 adds an optimization to check memstores first, and only if not present in the memstore then check the store files. In the presence of deletes, the above optimization is not reliable. If the column is marked as deleted in the memstore, one should not look further into the store files. But currently, the code does so. 
Sample test code outline:

{code}
admin.createTable(desc)
table = HTable.new(conf, tableName)
table.incrementColumnValue(Bytes.toBytes(row), cf1name, Bytes.toBytes(column), 5)
admin.flush(tableName)
sleep(2)
del = Delete.new(Bytes.toBytes(row))
table.delete(del)
table.incrementColumnValue(Bytes.toBytes(row), cf1name, Bytes.toBytes(column), 5)
get = Get.new(Bytes.toBytes(row))
keyValues = table.get(get).raw()
keyValues.each do |keyValue|
  puts "Expect 5; Got Value=#{Bytes.toLong(keyValue.getValue())}"
end
{code}

The above prints:

{code}
Expect 5; Got Value=10
{code}
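The failure mode can be modeled with two layers: a memstore that may hold a delete marker, and an older flushed store file. The buggy lookup treats "not in memstore" and "deleted in memstore" the same and falls through to the store file; the fix stops at the delete marker. A self-contained model of the two behaviors (not the HRegion code):

```java
import java.util.HashMap;
import java.util.Map;

class IcvModel {
    static final Long DELETE = Long.MIN_VALUE; // sentinel delete marker
    final Map<String, Long> memstore = new HashMap<>();
    final Map<String, Long> storeFiles = new HashMap<>();

    // Buggy: a delete marker in the memstore is skipped, so the stale
    // store-file value is resurrected.
    long buggyGet(String col) {
        Long v = memstore.get(col);
        if (v != null && !v.equals(DELETE)) return v;
        return storeFiles.getOrDefault(col, 0L);
    }

    // Fixed: a memstore delete marker terminates the search.
    long fixedGet(String col) {
        Long v = memstore.get(col);
        if (v != null) return v.equals(DELETE) ? 0L : v;
        return storeFiles.getOrDefault(col, 0L);
    }
}
```

Replaying the test above: with 5 flushed to the store file and a delete in the memstore, the buggy read sees 5 and the second increment writes 10; the fixed read sees the delete and the increment correctly yields 5.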
[jira] [Commented] (HBASE-5720) HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no checksums
[ https://issues.apache.org/jira/browse/HBASE-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253630#comment-13253630 ] Hudson commented on HBASE-5720: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5720 HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no checksums (Matt Corgan) (Revision 1310932) Result = SUCCESS larsh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java HFileDataBlockEncoderImpl uses wrong header size when reading HFiles with no checksums -- Key: HBASE-5720 URL: https://issues.apache.org/jira/browse/HBASE-5720 Project: HBase Issue Type: Bug Components: io, regionserver Affects Versions: 0.94.0 Reporter: Matt Corgan Assignee: Matt Corgan Priority: Blocker Fix For: 0.94.0 Attachments: 5720-trunk-v2.txt, 5720-trunk.txt, 5720v4.txt, 5720v4.txt, 5720v4.txt, HBASE-5720-v1.patch, HBASE-5720-v2.patch, HBASE-5720-v3.patch When reading a .92 HFile without checksums, encoding it, and storing in the block cache, the HFileDataBlockEncoderImpl always allocates a dummy header appropriate for checksums even though there are none. This corrupts the byte[]. Attaching a patch that allocates a DUMMY_HEADER_NO_CHECKSUM in that case which I think is the desired behavior. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5599) [hbck] handle NO_VERSION_FILE and SHOULD_NOT_BE_DEPLOYED inconsistencies
[ https://issues.apache.org/jira/browse/HBASE-5599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253634#comment-13253634 ] Hudson commented on HBASE-5599: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5599 [hbck] handle NO_VERSION_FILE and SHOULD_NOT_BE_DEPLOYED inconsistencies (fulin wang) (Revision 1324880) Result = SUCCESS jmhsieh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/util/hbck/HbckTestingUtil.java [hbck] handle NO_VERSION_FILE and SHOULD_NOT_BE_DEPLOYED inconsistencies Key: HBASE-5599 URL: https://issues.apache.org/jira/browse/HBASE-5599 Project: HBase Issue Type: New Feature Components: hbck Affects Versions: 0.90.6 Reporter: fulin wang Assignee: fulin wang Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0 Attachments: 0.90-surefire-report-hbck.html, hbase-5599-0.90.patch, hbase-5599-0.90_v2.patch, hbase-5599-0.90_v3.patch, hbase-5599-0.90_v5.patch, hbase-5599-0.90_v6.patch, hbase-5599-0.90_v7.patch, hbase-5599-0.90_v8, hbase-5599-0.92_v5.patch, hbase-5599-0.94_v5.patch, hbase-5599-92-v8.patch, hbase-5599-trunk_v5.patch, hbase-5599-trunk_v7.patch, hbase-5599-trunk_v8.patch, license.png The hbck tool can not fix the six scenarios. 1. Version file does not exist in root dir. Fix: I try to create a version file by 'FSUtils.setVersion' method. 2. [REGIONNAME][KEY] on HDFS, but not listed in META or deployed on any region server. Fix: I get region info form the hdfs file, this region info write to '.META.' table. 3. [REGIONNAME][KEY] not in META, but deployed on [SERVERNAME] Fix: I get region info form the hdfs file, this region info write to '.META.' table. 4. [REGIONNAME] should not be deployed according to META, but is deployed on [SERVERNAME] Fix: Close this region. 5. 
First region should start with an empty key. You need to create a new region and regioninfo in HDFS to plug the hole. Fix: The region info is in neither HDFS nor .META., so it creates an empty region for this error. 6. There is a hole in the region chain between [KEY] and [KEY]. You need to create a new regioninfo and region dir in HDFS to plug the hole. Fix: The region info is in neither HDFS nor .META., so it creates an empty region for this hole.
[jira] [Commented] (HBASE-5717) Scanner metrics are only reported if you get to the end of a scanner
[ https://issues.apache.org/jira/browse/HBASE-5717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253631#comment-13253631 ] Hudson commented on HBASE-5717: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5717 Scanner metrics are only reported if you get to the end of a scanner (Ian Varley) (Revision 1325342) Result = SUCCESS larsh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java Scanner metrics are only reported if you get to the end of a scanner Key: HBASE-5717 URL: https://issues.apache.org/jira/browse/HBASE-5717 Project: HBase Issue Type: Bug Components: client, metrics Reporter: Ian Varley Assignee: Ian Varley Priority: Minor Fix For: 0.94.0, 0.96.0 Attachments: 5717-v4.patch, ClientScanner_HBASE_5717-v2.patch, ClientScanner_HBASE_5717-v3.patch, ClientScanner_HBASE_5717.patch Original Estimate: 4h Remaining Estimate: 4h When you turn on Scanner Metrics, the metrics are currently only made available if you run over all records available in the scanner. If you stop iterating before the end, the values are never flushed into the metrics object (in the Scan attribute). Will supply a patch with fix and test. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-3082) For ICV gets, first look in MemStore before reading StoreFiles
[ https://issues.apache.org/jira/browse/HBASE-3082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253632#comment-13253632 ] Hudson commented on HBASE-3082: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-3443 ICV optimization to look in memstore first and then store files (HBASE-3082) does not work when deletes are in the mix (Revision 1325453) Result = SUCCESS larsh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java For ICV gets, first look in MemStore before reading StoreFiles -- Key: HBASE-3082 URL: https://issues.apache.org/jira/browse/HBASE-3082 Project: HBase Issue Type: Improvement Components: regionserver Reporter: Jonathan Gray Assignee: Prakash Khemani Fix For: 0.90.0 Attachments: HBASE-3082-FINAL.patch For incrementColumnValue operations, it is possible to check MemStore for the column being incremented without sacrificing correctness. If the column is not found in MemStore, we would then have to do a normal Get that opens/checks all StoreFiles for the given Store. In practice, this makes increment operations significantly faster for recently/frequently incremented columns. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5719) Enhance hbck to sideline overlapped mega regions
[ https://issues.apache.org/jira/browse/HBASE-5719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253633#comment-13253633 ] Hudson commented on HBASE-5719: --- Integrated in HBase-0.94-security #9 (See [https://builds.apache.org/job/HBase-0.94-security/9/]) HBASE-5719 Enhance hbck to sideline overlapped mega regions (Jimmy Xiang) (Revision 1325403) Result = SUCCESS jmhsieh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/util/RegionSplitCalculator.java * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/util/TestRegionSplitCalculator.java Enhance hbck to sideline overlapped mega regions Key: HBASE-5719 URL: https://issues.apache.org/jira/browse/HBASE-5719 Project: HBase Issue Type: New Feature Components: hbck Affects Versions: 0.94.0, 0.96.0 Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0 Attachments: 5719.patch, 5719_0.90.patch, 5719_0.92.patch, 5719_0.94.patch, hbase-5719.patch, hbase-5719_0.90.patch, hbase-5719_0.92.patch, hbase-5719_0.94.patch, hbase-5719_v3-new.patch, hbase-5719_v3.patch If there are too many regions in one overlapped group (by default, more than 10), hbck currently doesn't merge them since it takes time. In this case, we can sideline some regions in the group and break the overlapping to fix the inconsistency. Later on, sidelined regions can be bulk loaded manually. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5778) Turn on WAL compression by default
[ https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253636#comment-13253636 ] Hadoop QA commented on HBASE-5778: -- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12522527/5778-addendum.txt against trunk revision . +1 @author. The patch does not contain any @author tags. +1 tests included. The patch appears to include 6 new or modified tests. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. -1 findbugs. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests: org.apache.hadoop.hbase.mapreduce.TestWALPlayer org.apache.hadoop.hbase.coprocessor.TestClassLoading Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/1515//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/1515//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/1515//console This message is automatically generated. Turn on WAL compression by default -- Key: HBASE-5778 URL: https://issues.apache.org/jira/browse/HBASE-5778 Project: HBase Issue Type: Improvement Reporter: Jean-Daniel Cryans Assignee: Lars Hofhansl Priority: Blocker Fix For: 0.96.0, 0.94.1 Attachments: 5778-addendum.txt, 5778.addendum, HBASE-5778.patch I ran some tests to verify if WAL compression should be turned on by default. For a use case where it's not very useful (values two order of magnitude bigger than the keys), the insert time wasn't different and the CPU usage 15% higher (150% CPU usage VS 130% when not compressing the WAL). 
When values are smaller than the keys, I saw a 38% improvement in insert run time, and CPU usage was 33% higher (600% CPU usage vs 450%). I'm not sure WAL compression accounts for all the additional CPU usage; it might just be that we're able to insert faster and we spend more time in the MemStore per second (because our MemStores are bad when they contain tens of thousands of values). Those are two extremes, but it shows that for the price of some CPU we can save a lot. My machines have 2 quads with HT, so I still had a lot of idle CPUs.
[jira] [Commented] (HBASE-5741) ImportTsv does not check for table existence
[ https://issues.apache.org/jira/browse/HBASE-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253637#comment-13253637 ] Himanshu Vashishtha commented on HBASE-5741: I am waiting for Clint to give me some numbers of such use cases to make the cut; (Apparently, he is on vacation these days and will be back this Monday) Thanks. ImportTsv does not check for table existence - Key: HBASE-5741 URL: https://issues.apache.org/jira/browse/HBASE-5741 Project: HBase Issue Type: Bug Components: mapreduce Affects Versions: 0.90.4 Reporter: Clint Heath Assignee: Himanshu Vashishtha Fix For: 0.96.0, 0.94.1 Attachments: 5741-94.txt, 5741-v3.txt, HBase-5741-v2.patch, HBase-5741.patch The usage statement for the importtsv command to hbase claims this: Note: if you do not use this option, then the target table must already exist in HBase (in reference to the importtsv.bulk.output command-line option) The truth is, the table must exist no matter what, importtsv cannot and will not create it for you. This is the case because the createSubmittableJob method of ImportTsv does not even attempt to check if the table exists already, much less create it: (From org.apache.hadoop.hbase.mapreduce.ImportTsv.java) 305 HTable table = new HTable(conf, tableName); The HTable method signature in use there assumes the table exists and runs a meta scan on it: (From org.apache.hadoop.hbase.client.HTable.java) 142 * Creates an object to access a HBase table. ... 151 public HTable(Configuration conf, final String tableName) What we should do inside of createSubmittableJob is something similar to what the completebulkloads command would do: (Taken from org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.java) 690 boolean tableExists = this.doesTableExist(tableName); 691 if (!tableExists) this.createTable(tableName,dirPath); Currently the docs are misleading, the table in fact must exist prior to running importtsv. 
We should check whether it exists rather than assuming it's already there and throwing the exception below: 12/03/14 17:15:42 WARN client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch META table: org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for table: myTable2, row=myTable2,,99 at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:150) ... -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
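The fix described above, mirroring what LoadIncrementalHFiles does, could look roughly like the following sketch. It uses the public 0.90-era client API (HBaseAdmin.tableExists / createTable) rather than the private helpers quoted from LoadIncrementalHFiles; the class and method names here are illustrative, not the eventual patch:

```java
// Hypothetical sketch: verify the target table exists before wiring up the
// ImportTsv job, instead of letting new HTable(...) fail with a meta-scan error.
// Assumes the 0.90-era org.apache.hadoop.hbase.client API.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;

public class ImportTsvTableCheck {
  static HTable openOrCreate(Configuration conf, String tableName, String family)
      throws Exception {
    HBaseAdmin admin = new HBaseAdmin(conf);
    if (!admin.tableExists(tableName)) {
      // Either create the table (what completebulkload does) ...
      HTableDescriptor desc = new HTableDescriptor(tableName);
      desc.addFamily(new HColumnDescriptor(family));
      admin.createTable(desc);
      // ... or fail fast here with a clear TableNotFoundException message,
      // so the user isn't misled by a prefetch-META warning.
    }
    return new HTable(conf, tableName);
  }
}
```

Whether to auto-create or fail fast is exactly the open question in the comment above (waiting on Clint's use-case numbers); the sketch shows the check itself either way.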
[jira] [Commented] (HBASE-5772) Unable to open the few links in http://hbase.apache.org/
[ https://issues.apache.org/jira/browse/HBASE-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253676#comment-13253676 ] Hudson commented on HBASE-5772: --- Integrated in HBase-TRUNK #2753 (See [https://builds.apache.org/job/HBase-TRUNK/2753/]) HBASE-5772 Unable to open the few links in http://hbase.apache.org/ (Revision 1325893) Result = FAILURE stack : Files : * /hbase/trunk/pom.xml * /hbase/trunk/src/site/resources/css/site.css Unable to open the few links in http://hbase.apache.org/ Key: HBASE-5772 URL: https://issues.apache.org/jira/browse/HBASE-5772 Project: HBase Issue Type: Bug Components: documentation Affects Versions: 0.94.0 Reporter: Kiran BC Assignee: stack Fix For: 0.96.0 Attachments: 5772.txt A few links on http://hbase.apache.org/ are not working. For example, Ref Guide (multi-page) actually links to http://hbase.apache.org/book/book.html, and trying to open it gives a Page not found error. If I add /book to the url, like http://hbase.apache.org/book/book/book.html, it takes me to the Apache HBase Reference Guide. I think the folder structure has been changed.
[jira] [Created] (HBASE-5784) Enable mvn deploy of website
Enable mvn deploy of website Key: HBASE-5784 URL: https://issues.apache.org/jira/browse/HBASE-5784 Project: HBase Issue Type: Improvement Reporter: stack Up to now, deploying the website has meant building it locally, copying it up to apache, and putting it into place under /www/hbase.apache.org. Change it so maven can deploy the site. The good thing about having maven do it is that it's regular; permissions will always be the same, so Doug and I won't be fighting each other when we stick stuff up there. Also, it's a one-step process rather than multiple.
[jira] [Updated] (HBASE-5784) Enable mvn deploy of website
[ https://issues.apache.org/jira/browse/HBASE-5784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5784: - Attachment: 5784.txt Add distributionManagement section that only has website stuff in it. Enable mvn deploy of website Key: HBASE-5784 URL: https://issues.apache.org/jira/browse/HBASE-5784 Project: HBase Issue Type: Improvement Reporter: stack Attachments: 5784.txt Up to this, deploy of website has been build local and then copy up to apache and put it into place under /www/hbase.apache.org. Change it so can have maven deploy the site. The good thing about having the latter do it is that its regular; permissions will always be the same so Doug and I won't be fighting each other when we stick stuff up there. Also, its a one step process rather than multiple. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
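The attached 5784.txt isn't reproduced in the notification, but a site-only distributionManagement section in the pom would look roughly like this. The repository id and target URL below are placeholders, not the patch's actual values:

```xml
<!-- Hypothetical sketch of a website-only distributionManagement block;
     the id and url here are placeholders -- the real values are in 5784.txt. -->
<distributionManagement>
  <site>
    <id>hbase.apache.org</id>
    <name>HBase Website</name>
    <url>scp://people.apache.org/www/hbase.apache.org</url>
  </site>
</distributionManagement>
```

Keeping only the site entry (no artifact repositories) is what lets `mvn site:deploy` work without touching how release artifacts are published.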
[jira] [Resolved] (HBASE-5784) Enable mvn deploy of website
[ https://issues.apache.org/jira/browse/HBASE-5784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-5784. -- Resolution: Fixed Fix Version/s: 0.96.0 Assignee: stack Release Note: Build and deploy the site with the below command $ ~/bin/apache-maven-3.0.4/bin/mvn -X clean site:site site:deploy You must use mvn3. You will probably also need a settings.xml file under your ~/.m2/ directory as described here: http://www.apache.org/dev/publishing-maven-artifacts.html All site deploys should go via this path going forward because it ensures permissions up in apache making it so other members of hbase group can deploy w/o permission conflicts. Enable mvn deploy of website Key: HBASE-5784 URL: https://issues.apache.org/jira/browse/HBASE-5784 Project: HBase Issue Type: Improvement Reporter: stack Assignee: stack Fix For: 0.96.0 Attachments: 5784.txt Up to this, deploy of website has been build local and then copy up to apache and put it into place under /www/hbase.apache.org. Change it so can have maven deploy the site. The good thing about having the latter do it is that its regular; permissions will always be the same so Doug and I won't be fighting each other when we stick stuff up there. Also, its a one step process rather than multiple. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5784) Enable mvn deploy of website
[ https://issues.apache.org/jira/browse/HBASE-5784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253691#comment-13253691 ] stack commented on HBASE-5784: -- Committed to trunk Enable mvn deploy of website Key: HBASE-5784 URL: https://issues.apache.org/jira/browse/HBASE-5784 Project: HBase Issue Type: Improvement Reporter: stack Assignee: stack Fix For: 0.96.0 Attachments: 5784.txt Up to this, deploy of website has been build local and then copy up to apache and put it into place under /www/hbase.apache.org. Change it so can have maven deploy the site. The good thing about having the latter do it is that its regular; permissions will always be the same so Doug and I won't be fighting each other when we stick stuff up there. Also, its a one step process rather than multiple. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication
[ https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shaneal Manek updated HBASE-5780: - Attachment: HBASE-5780-v2.patch Throws an IllegalStateException on interruption. Currently running tests with the security profile (and will upload the results as soon as they finish). Fix race in HBase regionserver startup vs ZK SASL authentication Key: HBASE-5780 URL: https://issues.apache.org/jira/browse/HBASE-5780 Project: HBase Issue Type: Bug Components: security Affects Versions: 0.92.1, 0.94.0 Reporter: Shaneal Manek Assignee: Shaneal Manek Attachments: HBASE-5780-v2.patch, HBASE-5780.patch Secure RegionServers sometimes fail to start with the following backtrace: 2012-03-22 17:20:16,737 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during initialization, aborting org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /hbase/shutdown at org.apache.zookeeper.KeeperException.create(KeeperException.java:113) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295) at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518) at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494) at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77) at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569) at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634) at java.lang.Thread.run(Thread.java:662) -- This message is automatically generated by JIRA. 
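The v2 behavior described above (block until ZK SASL authentication settles before the regionserver touches znodes, and surface interruption as an IllegalStateException) can be sketched in plain Java. The class and method names here are illustrative, not the patch's:

```java
// Illustrative sketch of the race-avoidance pattern: poll until SASL
// authentication completes before issuing ZooKeeper reads, and convert an
// interrupt into the IllegalStateException the v2 patch's comment describes.
public class SaslLatch {
  public interface AuthState {
    boolean isAuthenticated();
  }

  public static void waitForAuth(AuthState state, long timeoutMs) {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!state.isAuthenticated()) {
      if (System.currentTimeMillis() > deadline) {
        throw new IllegalStateException("Timed out waiting for SASL authentication");
      }
      try {
        Thread.sleep(100);  // back off briefly before re-checking
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();  // preserve the interrupt flag
        throw new IllegalStateException("Interrupted while waiting for SASL auth", e);
      }
    }
  }
}
```

Without such a gate, the first getData on /hbase/shutdown can race the SASL handshake and fail with the NoAuth error in the backtrace above.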
[jira] [Commented] (HBASE-5620) Convert the client protocol of HRegionInterface to PB
[ https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253708#comment-13253708 ] stack commented on HBASE-5620: -- It passed for me. Let me commit this monster. Convert the client protocol of HRegionInterface to PB - Key: HBASE-5620 URL: https://issues.apache.org/jira/browse/HBASE-5620 Project: HBase Issue Type: Sub-task Components: ipc, master, migration, regionserver Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 0.96.0 Attachments: hbase-5620_v3.patch, hbase-5620_v4.patch, hbase-5620_v4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5620) Convert the client protocol of HRegionInterface to PB
[ https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253712#comment-13253712 ] stack commented on HBASE-5620: -- I think src/main/java/org/apache/hadoop/hbase/protobuf/ClientProtocol.java is in wrong package. Ditto for AdminProtocol. What you think Jimmy? Should we move them? Where should they go? At top level? Or into client package? Convert the client protocol of HRegionInterface to PB - Key: HBASE-5620 URL: https://issues.apache.org/jira/browse/HBASE-5620 Project: HBase Issue Type: Sub-task Components: ipc, master, migration, regionserver Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 0.96.0 Attachments: hbase-5620_v3.patch, hbase-5620_v4.patch, hbase-5620_v4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-5620) Convert the client protocol of HRegionInterface to PB
[ https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-5620: - Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk. Thats some pretty heavy lifting you got going on in here Jimmy. Good stuff. Convert the client protocol of HRegionInterface to PB - Key: HBASE-5620 URL: https://issues.apache.org/jira/browse/HBASE-5620 Project: HBase Issue Type: Sub-task Components: ipc, master, migration, regionserver Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 0.96.0 Attachments: hbase-5620_v3.patch, hbase-5620_v4.patch, hbase-5620_v4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5620) Convert the client protocol of HRegionInterface to PB
[ https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253714#comment-13253714 ] stack commented on HBASE-5620: -- Mind opening new issues Jimmy to do outstanding work like unit tests? Convert the client protocol of HRegionInterface to PB - Key: HBASE-5620 URL: https://issues.apache.org/jira/browse/HBASE-5620 Project: HBase Issue Type: Sub-task Components: ipc, master, migration, regionserver Reporter: Jimmy Xiang Assignee: Jimmy Xiang Fix For: 0.96.0 Attachments: hbase-5620_v3.patch, hbase-5620_v4.patch, hbase-5620_v4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5784) Enable mvn deploy of website
[ https://issues.apache.org/jira/browse/HBASE-5784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253728#comment-13253728 ] Hudson commented on HBASE-5784: --- Integrated in HBase-TRUNK #2754 (See [https://builds.apache.org/job/HBase-TRUNK/2754/]) HBASE-5784 Enable mvn deploy of website (Revision 1325917) Result = SUCCESS stack : Files : * /hbase/trunk/pom.xml * /hbase/trunk/src/site/xdoc/index.xml * /hbase/trunk/src/site/xdoc/old_news.xml Enable mvn deploy of website Key: HBASE-5784 URL: https://issues.apache.org/jira/browse/HBASE-5784 Project: HBase Issue Type: Improvement Reporter: stack Assignee: stack Fix For: 0.96.0 Attachments: 5784.txt Up to this, deploy of website has been build local and then copy up to apache and put it into place under /www/hbase.apache.org. Change it so can have maven deploy the site. The good thing about having the latter do it is that its regular; permissions will always be the same so Doug and I won't be fighting each other when we stick stuff up there. Also, its a one step process rather than multiple. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4336) Convert source tree into maven modules
[ https://issues.apache.org/jira/browse/HBASE-4336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253734#comment-13253734 ] stack commented on HBASE-4336: -- +1 Tell us more about the issue Jesse. When I do mvn compile on a project of many modules, its fine except for the case where tests depend on the product of an earlier module? Convert source tree into maven modules -- Key: HBASE-4336 URL: https://issues.apache.org/jira/browse/HBASE-4336 Project: HBase Issue Type: Task Components: build Reporter: Gary Helmling Priority: Critical Fix For: 0.96.0 When we originally converted the build to maven we had a single core module defined, but later reverted this to a module-less build for the sake of simplicity. It now looks like it's time to re-address this, as we have an actual need for modules to: * provide a trimmed down client library that applications can make use of * more cleanly support building against different versions of Hadoop, in place of some of the reflection machinations currently required * incorporate the secure RPC engine that depends on some secure Hadoop classes I propose we start simply by refactoring into two initial modules: * core - common classes and utilities, and client-side code and interfaces * server - master and region server implementations and supporting code This would also lay the groundwork for incorporating the HBase security features that have been developed. Once the module structure is in place, security-related features could then be incorporated into a third module -- security -- after normal review and approval. The security module could then depend on secure Hadoop, without modifying the dependencies of the rest of the HBase code. -- This message is automatically generated by JIRA. 
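The two-module split proposed above would surface at the top of the parent pom as a modules list, roughly like this (module names taken from the proposal; the final names and layout were settled in the issue itself):

```xml
<!-- Sketch of the proposed split in the parent pom.
     "core" = common classes, utilities, client-side code and interfaces;
     "server" = master and regionserver implementations. -->
<modules>
  <module>core</module>
  <module>server</module>
</modules>
```

The security module would then be added as a third entry, depending on core/server plus secure Hadoop, without changing the dependencies of the other modules.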
[jira] [Resolved] (HBASE-4094) improve hbck tool to fix more hbase problem
[ https://issues.apache.org/jira/browse/HBASE-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-4094. -- Resolution: Fixed Resolving at Anoop's suggestion as dup of hbase-5128 improve hbck tool to fix more hbase problem --- Key: HBASE-4094 URL: https://issues.apache.org/jira/browse/HBASE-4094 Project: HBase Issue Type: New Feature Components: master Affects Versions: 0.90.3 Reporter: feng xu Fix For: 0.90.7 Attachments: HbaseFsck_TableChain.patch Original Estimate: 12h Remaining Estimate: 12h -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication
[ https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253748#comment-13253748 ] Hadoop QA commented on HBASE-5780: -- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12522612/HBASE-5780-v2.patch against trunk revision . +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javadoc. The javadoc tool did not generate any warning messages. +1 javac. The applied patch does not increase the total number of javac compiler warnings. -1 findbugs. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests: org.apache.hadoop.hbase.mapreduce.TestWALPlayer Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/1517//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/1517//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/1517//console This message is automatically generated. 
Fix race in HBase regionserver startup vs ZK SASL authentication Key: HBASE-5780 URL: https://issues.apache.org/jira/browse/HBASE-5780 Project: HBase Issue Type: Bug Components: security Affects Versions: 0.92.1, 0.94.0 Reporter: Shaneal Manek Assignee: Shaneal Manek Attachments: HBASE-5780-v2.patch, HBASE-5780.patch Secure RegionServers sometimes fail to start with the following backtrace: 2012-03-22 17:20:16,737 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during initialization, aborting org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /hbase/shutdown at org.apache.zookeeper.KeeperException.create(KeeperException.java:113) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295) at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518) at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494) at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77) at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569) at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532) at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634) at java.lang.Thread.run(Thread.java:662) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5778) Turn on WAL compression by default
[ https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253750#comment-13253750 ] Hudson commented on HBASE-5778: --- Integrated in HBase-TRUNK-security #170 (See [https://builds.apache.org/job/HBase-TRUNK-security/170/]) HBASE-5778 Turn on WAL compression by default (Revision 1325801) Result = FAILURE stack : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogWriter.java Turn on WAL compression by default -- Key: HBASE-5778 URL: https://issues.apache.org/jira/browse/HBASE-5778 Project: HBase Issue Type: Improvement Reporter: Jean-Daniel Cryans Assignee: Lars Hofhansl Priority: Blocker Fix For: 0.96.0, 0.94.1 Attachments: 5778-addendum.txt, 5778.addendum, HBASE-5778.patch I ran some tests to verify whether WAL compression should be turned on by default. For a use case where it's not very useful (values two orders of magnitude bigger than the keys), the insert time wasn't different and the CPU usage was 15% higher (150% CPU usage vs 130% when not compressing the WAL). When values are smaller than the keys, I saw a 38% improvement in the insert run time, and CPU usage was 33% higher (600% CPU usage vs 450%). I'm not sure WAL compression accounts for all the additional CPU usage; it might just be that we're able to insert faster and we spend more time in the MemStore per second (because our MemStores are bad when they contain tens of thousands of values). Those are two extremes, but it shows that for the price of some CPU we can save a lot. My machines have 2 quads with HT, so I still had a lot of idle CPUs.
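Until the default flips, WAL compression is a single per-cluster switch in hbase-site.xml. The property name below is the one introduced by HBASE-4608; verify it against the version you are running:

```xml
<!-- hbase-site.xml: enable WAL (HLog) compression explicitly -->
<property>
  <name>hbase.regionserver.wal.enablecompression</name>
  <value>true</value>
</property>
```

The benchmark above suggests the win is largest when keys dominate the entry size (the 38% insert improvement) and smallest when values dwarf the keys.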
[jira] [Commented] (HBASE-5772) Unable to open the few links in http://hbase.apache.org/
[ https://issues.apache.org/jira/browse/HBASE-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253752#comment-13253752 ] Hudson commented on HBASE-5772: --- Integrated in HBase-TRUNK-security #170 (See [https://builds.apache.org/job/HBase-TRUNK-security/170/]) HBASE-5772 Unable to open the few links in http://hbase.apache.org/ (Revision 1325893) Result = FAILURE stack : Files : * /hbase/trunk/pom.xml * /hbase/trunk/src/site/resources/css/site.css Unable to open the few links in http://hbase.apache.org/ Key: HBASE-5772 URL: https://issues.apache.org/jira/browse/HBASE-5772 Project: HBase Issue Type: Bug Components: documentation Affects Versions: 0.94.0 Reporter: Kiran BC Assignee: stack Fix For: 0.96.0 Attachments: 5772.txt Few links in http://hbase.apache.org/ is not working. For example, Ref Guide (multi-page) will actually link to http://hbase.apache.org/book/book.html and if I try to open this, Page not found error is coming. If I add /book in the url, like http://hbase.apache.org/book/book/book.html, it is taking me to the Apache HBase Reference Guide I think the folder structure has been changed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5784) Enable mvn deploy of website
[ https://issues.apache.org/jira/browse/HBASE-5784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253751#comment-13253751 ] Hudson commented on HBASE-5784: --- Integrated in HBase-TRUNK-security #170 (See [https://builds.apache.org/job/HBase-TRUNK-security/170/]) HBASE-5784 Enable mvn deploy of website (Revision 1325917) Result = FAILURE stack : Files : * /hbase/trunk/pom.xml * /hbase/trunk/src/site/xdoc/index.xml * /hbase/trunk/src/site/xdoc/old_news.xml Enable mvn deploy of website Key: HBASE-5784 URL: https://issues.apache.org/jira/browse/HBASE-5784 Project: HBase Issue Type: Improvement Reporter: stack Assignee: stack Fix For: 0.96.0 Attachments: 5784.txt Up to this, deploy of website has been build local and then copy up to apache and put it into place under /www/hbase.apache.org. Change it so can have maven deploy the site. The good thing about having the latter do it is that its regular; permissions will always be the same so Doug and I won't be fighting each other when we stick stuff up there. Also, its a one step process rather than multiple. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5488) OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property
[ https://issues.apache.org/jira/browse/HBASE-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253749#comment-13253749 ] Hudson commented on HBASE-5488: --- Integrated in HBase-TRUNK-security #170 (See [https://builds.apache.org/job/HBase-TRUNK-security/170/]) HBASE-5488 OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property (gaojinchao) (Revision 1325625) Result = FAILURE jmhsieh : Files : * /hbase/trunk/src/main/java/org/apache/hadoop/hbase/util/hbck/OfflineMetaRepair.java OfflineMetaRepair doesn't support hadoop 0.20's fs.default.name property Key: HBASE-5488 URL: https://issues.apache.org/jira/browse/HBASE-5488 Project: HBase Issue Type: Bug Affects Versions: 0.90.6 Reporter: gaojinchao Assignee: gaojinchao Priority: Minor Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0 Attachments: HBASE-5488-branch92.patch, HBASE-5488-trunk.patch, HBASE-5488_branch90.txt, hbase-5488-v2.patch I want to use the OfflineMetaRepair tool and found that nobody has fixed this bug. I will make a patch.
12/01/05 23:23:30 ERROR util.HBaseFsck: Bailed out due to: java.lang.IllegalArgumentException: Wrong FS: hdfs:// us01-ciqps1-name01.carrieriq.com:9000/hbase/M2M-INTEGRATION-MM_TION-13 25190318714/0003d2ede27668737e192d8430dbe5d0/.regioninfo, expected: file:/// at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:352) at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47) at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:368) at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251) at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:126) at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:284) at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:398) at org.apache.hadoop.hbase.util.HBaseFsck.loadMetaEntry(HBaseFsck.java:256) at org.apache.hadoop.hbase.util.HBaseFsck.loadTableInfo(HBaseFsck.java:284) at org.apache.hadoop.hbase.util.HBaseFsck.rebuildMeta(HBaseFsck.java:402) at org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair.main(OfflineMetaRe -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
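The "Wrong FS: hdfs://... expected: file:///" failure above is the classic symptom of resolving a filesystem from the default config key (which differs between hadoop 0.20's fs.default.name and later fs.defaultFS) instead of from the path itself. A common fix pattern looks like the sketch below; whether the actual HBASE-5488 patch does exactly this is in the attached patches:

```java
// Illustrative sketch: derive the FileSystem from the hbase.rootdir Path
// rather than FileSystem.get(conf), so the hdfs:// scheme in the path wins
// regardless of which default-FS property name the Hadoop version uses.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RootDirFs {
  static FileSystem rootFs(Configuration conf) throws IOException {
    Path rootDir = new Path(conf.get("hbase.rootdir"));
    // Path.getFileSystem honors the scheme embedded in hbase.rootdir
    // (e.g. hdfs://namenode:9000/hbase), avoiding a fallback to file:///.
    return rootDir.getFileSystem(conf);
  }
}
```

With this, the .regioninfo reads in HBaseFsck.loadMetaEntry resolve against HDFS instead of the local RawLocalFileSystem seen in the stack trace.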