[jira] [Commented] (HBASE-7643) HFileArchiver.resolveAndArchive() race condition and snapshot data loss
[ https://issues.apache.org/jira/browse/HBASE-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559489#comment-13559489 ] Jonathan Hsieh commented on HBASE-7643:
---
I misspoke -- Matteo pointed out to me that #3 isn't a problem due to compactions but more likely due to splits (compactions only create new files, splits create new dirs and the parent dir is the likely deletion candidate). The high-level point still stands -- if a compaction happens while the archiver deletes the directory, the rename attempt can fail.

HFileArchiver.resolveAndArchive() race condition and snapshot data loss
---
Key: HBASE-7643
URL: https://issues.apache.org/jira/browse/HBASE-7643
Project: HBase
Issue Type: Bug
Affects Versions: hbase-6055, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Blocker
Fix For: 0.96.0, 0.94.5

* The master has an hfile cleaner thread (that is responsible for cleaning the /hbase/.archive dir)
** /hbase/.archive/table/region/family/hfile
** if the family/region/family directory is empty the cleaner removes it
* The master can archive files (from another thread, e.g. DeleteTableHandler)
* The region can archive files (from another server/process, e.g. compaction)

The simplified file archiving code looks like this:
{code}
HFileArchiver.resolveAndArchive(...) {
  // ensure that the archive dir exists
  fs.mkdir(archiveDir);
  // move the file to the archive
  success = fs.rename(originalPath/fileName, archiveDir/fileName);
  // if the rename failed, delete the file without archiving
  if (!success) fs.delete(originalPath/fileName);
}
{code}
Since there's no synchronization between HFileArchiver.resolveAndArchive() and the cleaner run (different process, thread, ...) you can end up in a situation where you are moving something into a directory that doesn't exist.
{code}
fs.mkdir(archiveDir);
// The HFileCleaner chore starts at this point,
// and the archive directory that we just ensured to be present gets removed.
// The rename at this point will fail since the parent directory is missing.
success = fs.rename(originalPath/fileName, archiveDir/fileName);
{code}
The bad thing about deleting the file without archiving it is that if you have a snapshot that relies on the file being present, or a clone table that relies on that file, you're losing data.

Possible solutions:
* Create a ZooKeeper lock, to notify the master ("Hey, I'm archiving something, wait a bit")
* Add an RS -> Master call to let the master remove files and avoid this kind of situation
* Avoid removing empty directories from the archive if the table exists or is not disabled
* Add a try catch around the fs.rename

The last one, the easiest one, looks like:
{code}
for (int i = 0; i < retries; ++i) {
  // ensure the archive directory is present
  fs.mkdir(archiveDir);  // possible race here
  // try to archive the file
  success = fs.rename(originalPath/fileName, archiveDir/fileName);
  if (success) break;
}
{code}
--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
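The retry approach above can be sketched against a local filesystem with java.nio.file. This is a minimal illustration of the pattern only -- the class and method names are made up here, and the NoSuchFileException handling stands in for HDFS's boolean rename failure; it is not HBase's actual HFileArchiver code:

```java
import java.io.IOException;
import java.nio.file.*;

public class ArchiveRetrySketch {
    // Try to move a file into archiveDir, re-creating the directory on each
    // attempt so a concurrent cleaner deleting it only costs one retry.
    static boolean archiveWithRetry(Path file, Path archiveDir, int retries) throws IOException {
        for (int i = 0; i < retries; ++i) {
            Files.createDirectories(archiveDir);   // ensure parent exists (race window is here)
            try {
                Files.move(file, archiveDir.resolve(file.getFileName()));
                return true;                       // rename succeeded: file is archived
            } catch (NoSuchFileException e) {
                // archiveDir vanished between mkdir and move (cleaner race): loop and retry
            }
        }
        return false;                              // all attempts lost the race
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("archive-demo");
        Path hfile = Files.createFile(tmp.resolve("hfile1"));
        boolean ok = archiveWithRetry(hfile, tmp.resolve(".archive"), 3);
        System.out.println(ok && Files.exists(tmp.resolve(".archive").resolve("hfile1")));
    }
}
```

The key point the sketch shows: the mkdir is re-done inside the loop, so losing the race to the cleaner converts a permanent data-loss path into a transient, retryable failure.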
[jira] [Created] (HBASE-7644) Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94
Ted Yu created HBASE-7644: - Summary: Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94 Key: HBASE-7644 URL: https://issues.apache.org/jira/browse/HBASE-7644 Project: HBase Issue Type: Bug Reporter: Ted Yu
[jira] [Updated] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2
[ https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-7594:
--
Status: Patch Available (was: Open)

TestLocalHBaseCluster failing on ubuntu2
Key: HBASE-7594
URL: https://issues.apache.org/jira/browse/HBASE-7594
Project: HBase
Issue Type: Bug
Components: test
Affects Versions: 0.96.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Attachments: 7594-1.patch, 7594-2.patch, 7594-3.patch

{noformat}
java.io.IOException: java.io.IOException: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
	at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612)
	at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533)
	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092)
	at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042)
	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427)
	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
	at org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450)
	at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:215)
	at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060)
	at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585)
	at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	... 3 more
Caused by: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
	at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:607)
	at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:615)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:115)
	at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564)
	at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599)
	at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1294)
	at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525)
	at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628)
	at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:426)
	at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:422)
	... 8 more
Caused by: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator
	at java.lang.Class.newInstance0(Class.java:340)
	at java.lang.Class.newInstance(Class.java:308)
	at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:605)
	... 17 more
{noformat}
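The innermost cause above is reflective instantiation of a type with no usable no-arg constructor: RawComparator is an interface, so Class.newInstance() must fail. A stand-alone reproduction of that failure mode (the interface here is a local stand-in, not the Hadoop class):

```java
public class InstantiationDemo {
    interface RawComparatorLike {}   // stand-in for the org.apache.hadoop.io.RawComparator interface

    public static void main(String[] args) {
        try {
            // Reflectively instantiating an interface always throws
            // InstantiationException, which is what FixedFileTrailer.createComparator hit.
            RawComparatorLike c = RawComparatorLike.class.newInstance();
            System.out.println("instantiated");
        } catch (InstantiationException | IllegalAccessException e) {
            System.out.println(e.getClass().getSimpleName());
        }
    }
}
```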
[jira] [Commented] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2
[ https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559735#comment-13559735 ] Hadoop QA commented on HBASE-7594:
--
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12565909/7594-3.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:red}-1 findbugs{color}. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
{color:red}-1 core tests{color}. The patch failed these unit tests:
org.apache.hadoop.hbase.constraint.TestConstraint
org.apache.hadoop.hbase.TestLocalHBaseCluster

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/4124//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4124//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/4124//console

This message is automatically generated.
[jira] [Updated] (HBASE-5664) CP hooks in Scan flow for fast forward when filter filters out a row
[ https://issues.apache.org/jira/browse/HBASE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-5664:
--
Status: Open (was: Patch Available)

CP hooks in Scan flow for fast forward when filter filters out a row
Key: HBASE-5664
URL: https://issues.apache.org/jira/browse/HBASE-5664
Project: HBase
Issue Type: Improvement
Components: Coprocessors, Filters
Affects Versions: 0.92.1
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Fix For: 0.96.0, 0.94.5
Attachments: HBASE-5664_94.patch, HBASE-5664_94_V2.patch, HBASE-5664_Trunk.patch

In HRegion.nextInternal(int limit, String metric) we have a while(true) loop to fetch the next result which satisfies the filter condition. When the Filter filters out the current fetched row, we call nextRow(byte [] currentRow) before going on with the next row.
{code}
if (results.isEmpty() || filterRow()) {
  // this seems like a redundant step - we already consumed the row
  // there're no left overs.
  // the reasons for calling this method are:
  // 1. reset the filters.
  // 2. provide a hook to fast forward the row (used by subclasses)
  nextRow(currentRow);
{code}
"// 2. provide a hook to fast forward the row (used by subclasses)" -- we can provide the same fast-forward feature for the CP also.
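The nextRow() hook pattern the issue describes can be sketched in plain Java. Class and method names here are hypothetical stand-ins, not HBase's actual scanner API; the point is only the shape of the hook: when a row is filtered out, an overridable callback fires, and the proposal is to expose the same callback to coprocessors:

```java
import java.util.*;

public class ScannerHookSketch {
    static class RowScanner {
        private final Iterator<String> rows;
        RowScanner(List<String> data) { rows = data.iterator(); }
        // filter: rows starting with "skip" are filtered out (illustrative rule)
        protected boolean filterRow(String row) { return row.startsWith("skip"); }
        // hook fired when a row is filtered, mirroring HRegion's nextRow() idea;
        // subclasses (or, per the issue, coprocessors) could fast-forward here
        protected void nextRow(String currentRow) {}
        List<String> scan() {
            List<String> results = new ArrayList<>();
            while (rows.hasNext()) {
                String row = rows.next();
                if (filterRow(row)) { nextRow(row); continue; }  // filtered: fire the hook
                results.add(row);
            }
            return results;
        }
    }

    public static void main(String[] args) {
        List<String> hooked = new ArrayList<>();
        RowScanner s = new RowScanner(Arrays.asList("a", "skip1", "b")) {
            @Override protected void nextRow(String r) { hooked.add(r); }  // observe skipped rows
        };
        System.out.println(s.scan() + " hooked=" + hooked);
    }
}
```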
[jira] [Updated] (HBASE-5664) CP hooks in Scan flow for fast forward when filter filters out a row
[ https://issues.apache.org/jira/browse/HBASE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-5664: -- Attachment: HBASE-5664_94_V2.patch
[jira] [Updated] (HBASE-5664) CP hooks in Scan flow for fast forward when filter filters out a row
[ https://issues.apache.org/jira/browse/HBASE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-5664: -- Attachment: HBASE-5664_Trunk_V2.patch
[jira] [Updated] (HBASE-5664) CP hooks in Scan flow for fast forward when filter filters out a row
[ https://issues.apache.org/jira/browse/HBASE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-5664: -- Status: Patch Available (was: Open)
[jira] [Commented] (HBASE-7637) hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
[ https://issues.apache.org/jira/browse/HBASE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559768#comment-13559768 ] nkeywal commented on HBASE-7637:
I don't understand this:
{noformat}
-<phase>compile</phase>
+<phase>test</phase>
{noformat}
Does it mean we would need to do mvn test -DskipTests before being able to do bin/startHbase.sh?

hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
Key: HBASE-7637
URL: https://issues.apache.org/jira/browse/HBASE-7637
Project: HBase
Issue Type: Bug
Components: build
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: Elliott Clark
Priority: Critical
Fix For: 0.96.0
Attachments: HBASE-7637-0.patch

I'm unclear on the root cause / fix. Here is the scenario:
{noformat}
mvn clean package install -Dhadoop.profile=2.0 -DskipTests
bin/start-hbase.sh
{noformat}
fails with
{noformat}
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.metrics2.lib.MetricMutable
	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
{noformat}
doing
{noformat}
rm -rf hbase-hadoop1-compat/target/
{noformat}
makes it work. In the pom.xml, we never reference hadoop2-compat. But doing so does not help: hadoop1-compat is compiled and takes precedence over hadoop2...
[jira] [Assigned] (HBASE-7644) Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94
[ https://issues.apache.org/jira/browse/HBASE-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reassigned HBASE-7644: - Assignee: Ted Yu Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94 --- Key: HBASE-7644 URL: https://issues.apache.org/jira/browse/HBASE-7644 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Ted Yu Attachments: 7644-disable-show-table.patch
[jira] [Updated] (HBASE-7644) Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94
[ https://issues.apache.org/jira/browse/HBASE-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-7644: -- Fix Version/s: 0.94.5 Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94 --- Key: HBASE-7644 URL: https://issues.apache.org/jira/browse/HBASE-7644 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.94.5 Attachments: 7644-disable-show-table.patch
[jira] [Updated] (HBASE-7644) Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94
[ https://issues.apache.org/jira/browse/HBASE-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-7644: -- Attachment: 7644-disable-show-table.patch Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94 --- Key: HBASE-7644 URL: https://issues.apache.org/jira/browse/HBASE-7644 Project: HBase Issue Type: Bug Reporter: Ted Yu Attachments: 7644-disable-show-table.patch
[jira] [Created] (HBASE-7645) put without timestamp duplicates the record/row
Guido Serra aka Zeph created HBASE-7645:
---
Summary: put without timestamp duplicates the record/row
Key: HBASE-7645
URL: https://issues.apache.org/jira/browse/HBASE-7645
Project: HBase
Issue Type: Brainstorming
Components: Client
Reporter: Guido Serra aka Zeph

if I call SQOOP a couple of times on the same dataset, outputting to HBase, I will end up with duplicated data...
{code}
hbase(main):030:0> get 'dump_HKFAS.sales_order', '1', {COLUMN => 'mysql:created_at', VERSIONS => 4}
COLUMN            CELL
 mysql:created_at timestamp=1358853505756, value=2011-12-21 18:07:38.0
 mysql:created_at timestamp=1358790515451, value=2011-12-21 18:07:38.0
2 row(s) in 0.0040 seconds

today's sqoop run
hbase(main):031:0> Date.new(1358853505756).toString()
=> Tue Jan 22 11:18:25 UTC 2013

yesterday's sqoop run
hbase(main):032:0> Date.new(1358790515451).toString()
=> Mon Jan 21 17:48:35 UTC 2013
{code}
the fact that the Put.add() method writes the kv without checking whether, apart from the timestamp, the value has changed -- is it by design? or a bug?

from: trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Put.java
{code}
/**
 * Add the specified column and value to this Put operation.
 * @param family family name
 * @param qualifier column qualifier
 * @param value column value
 * @return this
 */
public Put add(byte [] family, byte [] qualifier, byte [] value) {
  return add(family, qualifier, this.ts, value);
}

/**
 * Add the specified column and value, with the specified timestamp as
 * its version to this Put operation.
 * @param family family name
 * @param qualifier column qualifier
 * @param ts version timestamp
 * @param value column value
 * @return this
 */
public Put add(byte [] family, byte [] qualifier, long ts, byte [] value) {
  List<KeyValue> list = getKeyValueList(family);
  KeyValue kv = createPutKeyValue(family, qualifier, ts, value);
  list.add(kv);
  familyMap.put(kv.getFamily(), list);
  return this;
}
{code}
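A possible client-side mitigation, offered here as an assumption rather than anything proposed in the thread: have the loader supply a deterministic timestamp derived from the source row (for example the MySQL created_at value), so a re-run overwrites the same (qualifier, timestamp) cell instead of adding a version. A simplified in-memory model of why a fixed timestamp collapses re-puts:

```java
import java.util.HashMap;
import java.util.Map;

public class ExplicitTsSketch {
    // "qualifier@ts" -> value; mimics how a cell version is keyed by timestamp
    static Map<String, String> store = new HashMap<>();

    static void put(String qualifier, long ts, String value) {
        store.put(qualifier + "@" + ts, value);  // same qualifier + same ts overwrites
    }

    public static void main(String[] args) {
        long ts = 1324490858000L;  // hypothetical: derived from the source row, not wall clock
        put("mysql:created_at", ts, "2011-12-21 18:07:38.0");  // first sqoop run
        put("mysql:created_at", ts, "2011-12-21 18:07:38.0");  // re-run: overwrites, no new version
        System.out.println(store.size());
    }
}
```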
[jira] [Updated] (HBASE-7645) put without timestamp duplicates the record/row
[ https://issues.apache.org/jira/browse/HBASE-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guido Serra aka Zeph updated HBASE-7645:
Description:
if I call SQOOP a couple of times on the same dataset, outputting to HBase, I will end up with duplicated data...
{code}
hbase(main):030:0> get 'dump_HKFAS.sales_order', '1', {COLUMN => 'mysql:created_at', VERSIONS => 4}
COLUMN            CELL
 mysql:created_at timestamp=1358853505756, value=2011-12-21 18:07:38.0
 mysql:created_at timestamp=1358790515451, value=2011-12-21 18:07:38.0
2 row(s) in 0.0040 seconds

today's sqoop run
hbase(main):031:0> Date.new(1358853505756).toString()
=> Tue Jan 22 11:18:25 UTC 2013

yesterday's sqoop run
hbase(main):032:0> Date.new(1358790515451).toString()
=> Mon Jan 21 17:48:35 UTC 2013
{code}
the fact that the Put.add() method writes the kv without checking whether, apart from the timestamp, the value has changed -- is it by design? or a bug?

from: trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Put.java
{code}
public Put add(byte [] family, byte [] qualifier, byte [] value) {
  return add(family, qualifier, this.ts, value);
}

public Put add(byte [] family, byte [] qualifier, long ts, byte [] value) {
  List<KeyValue> list = getKeyValueList(family);
  KeyValue kv = createPutKeyValue(family, qualifier, ts, value);
  list.add(kv);
  familyMap.put(kv.getFamily(), list);
  return this;
}
{code}
[jira] [Updated] (HBASE-7645) put without timestamp duplicates the record/row
[ https://issues.apache.org/jira/browse/HBASE-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guido Serra aka Zeph updated HBASE-7645:
Description:
if I call SQOOP a couple of times on the same dataset, outputting to HBase, I will end up with duplicated data...
{code}
hbase(main):030:0> get 'dump_HKFAS.sales_order', '1', {COLUMN => 'mysql:created_at', VERSIONS => 4}
COLUMN            CELL
 mysql:created_at timestamp=1358853505756, value=2011-12-21 18:07:38.0
 mysql:created_at timestamp=1358790515451, value=2011-12-21 18:07:38.0
2 row(s) in 0.0040 seconds

today's sqoop run
hbase(main):031:0> Date.new(1358853505756).toString()
=> Tue Jan 22 11:18:25 UTC 2013

yesterday's sqoop run
hbase(main):032:0> Date.new(1358790515451).toString()
=> Mon Jan 21 17:48:35 UTC 2013
{code}
the fact that the Put.add() method writes the kv without checking whether, apart from the timestamp, the value has changed -- is it by design? or a bug? I mean, what's the idea behind? Is SQOOP (the client application) supposed to handle reading the value before issuing an add() call?

from: trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Put.java
{code}
public Put add(byte [] family, byte [] qualifier, byte [] value) {
  return add(family, qualifier, this.ts, value);
}

public Put add(byte [] family, byte [] qualifier, long ts, byte [] value) {
  List<KeyValue> list = getKeyValueList(family);
  KeyValue kv = createPutKeyValue(family, qualifier, ts, value);
  list.add(kv);
  familyMap.put(kv.getFamily(), list);
  return this;
}
{code}
[jira] [Commented] (HBASE-7645) put without timestamp duplicates the record/row
[ https://issues.apache.org/jira/browse/HBASE-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559780#comment-13559780 ]

Anoop Sam John commented on HBASE-7645:
---------------------------------------

When you put data without specifying a timestamp, HBase assigns a TS to the insert: the system time at the RS. So when you do a put with the same data set, it can get a new TS, making it a new version. This is expected behavior.
from: trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/Put.java

{code}
public Put add(byte [] family, byte [] qualifier, byte [] value) {
  return add(family, qualifier, this.ts, value);
}

public Put add(byte [] family, byte [] qualifier, long ts, byte [] value) {
  List<KeyValue> list = getKeyValueList(family);
  KeyValue kv = createPutKeyValue(family, qualifier, ts, value);
  list.add(kv);
  familyMap.put(kv.getFamily(), list);
  return this;
}
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
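The behavior Anoop describes can be sketched in plain Java (this is an illustrative model, not HBase code): HBase keys each cell's versions by timestamp, and a Put without an explicit ts gets the RegionServer's current time, so a re-import of identical data lands under a fresh key instead of overwriting.

```java
import java.util.Collections;
import java.util.NavigableMap;
import java.util.TreeMap;

// Minimal sketch of why the shell shows two identical values:
// versions are keyed by timestamp, newest first, as HBase stores them.
public class VersionSketch {

    // one cell: timestamp -> value
    static NavigableMap<Long, String> importTwice(long ts1, long ts2) {
        NavigableMap<Long, String> cell =
            new TreeMap<>(Collections.reverseOrder());
        cell.put(ts1, "2011-12-21 18:07:38.0"); // yesterday's sqoop run
        cell.put(ts2, "2011-12-21 18:07:38.0"); // today's run, same value
        return cell;
    }

    public static void main(String[] args) {
        // different "server time" per run -> two versions survive
        System.out.println(importTwice(1358790515451L, 1358853505756L).size()); // 2
        // same explicit ts -> the second put overwrites, one version
        System.out.println(importTwice(1358790515451L, 1358790515451L).size()); // 1
    }
}
```

With a per-cell VERSIONS limit, older surplus versions would eventually be pruned, but within the limit each timestamped put is kept.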
[jira] [Updated] (HBASE-7645) put without timestamp duplicates the record/row
[ https://issues.apache.org/jira/browse/HBASE-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guido Serra aka Zeph updated HBASE-7645:
----------------------------------------
    Priority: Trivial  (was: Major)
[jira] [Commented] (HBASE-7645) put without timestamp duplicates the record/row
[ https://issues.apache.org/jira/browse/HBASE-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559783#comment-13559783 ]

Guido Serra aka Zeph commented on HBASE-7645:
---------------------------------------------

[~anoopsamjohn] uh, ok... so that is then expected. Thanks for clarifying :) I'll handle it on the client side then.
[jira] [Resolved] (HBASE-7645) put without timestamp duplicates the record/row
[ https://issues.apache.org/jira/browse/HBASE-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guido Serra aka Zeph resolved HBASE-7645.
-----------------------------------------
    Resolution: Not A Problem
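One hedged sketch of "handling it on the client side": derive the Put timestamp from the row's own data (here, the MySQL created_at string) instead of letting the RegionServer assign wall-clock time, so a re-import reuses the same ts and overwrites in place. The method name tsFor and the put.add(...) call shape below are illustrative assumptions, not code from this issue.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.TimeZone;

// Hypothetical client-side fix: a deterministic, data-derived timestamp.
public class DeterministicTs {

    static long tsFor(String createdAt) throws ParseException {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.S");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.parse(createdAt).getTime();
    }

    public static void main(String[] args) throws ParseException {
        long run1 = tsFor("2011-12-21 18:07:38.0"); // yesterday's import
        long run2 = tsFor("2011-12-21 18:07:38.0"); // today's re-import
        // identical input -> identical ts, so the re-import would overwrite
        // the existing version rather than adding a duplicate:
        System.out.println(run1 == run2); // true
        // the (hypothetical) put would then be:
        //   put.add(family, qualifier, tsFor(createdAt), value);
    }
}
```

Note the trade-off: writes with explicit, non-monotonic timestamps interact with deletes and TTLs differently than server-assigned times, so this is a sketch, not a drop-in recommendation.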
[jira] [Updated] (HBASE-7633) Add a metric that tracks the current number of used RPC threads on the regionservers
[ https://issues.apache.org/jira/browse/HBASE-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Otis Gospodnetic updated HBASE-7633: Component/s: metrics Add a metric that tracks the current number of used RPC threads on the regionservers Key: HBASE-7633 URL: https://issues.apache.org/jira/browse/HBASE-7633 Project: HBase Issue Type: Improvement Components: metrics Reporter: Joey Echeverria Assignee: Elliott Clark One way to detect that you're hitting a John Wayne disk[1] would be if we could see when region servers exhausted their RPC handlers. This would also be useful when tuning the cluster for your workload to make sure that reads or writes were not starving the other operations out. [1] http://hbase.apache.org/book.html#bad.disk -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5664) CP hooks in Scan flow for fast forward when filter filters out a row
[ https://issues.apache.org/jira/browse/HBASE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559803#comment-13559803 ]

Ted Yu commented on HBASE-5664:
-------------------------------

Patch v2 looks good. Please add release notes.

CP hooks in Scan flow for fast forward when filter filters out a row
--------------------------------------------------------------------
Key: HBASE-5664
URL: https://issues.apache.org/jira/browse/HBASE-5664
Project: HBase
Issue Type: Improvement
Components: Coprocessors, Filters
Affects Versions: 0.92.1
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Fix For: 0.96.0, 0.94.5
Attachments: HBASE-5664_94.patch, HBASE-5664_94_V2.patch, HBASE-5664_Trunk.patch, HBASE-5664_Trunk_V2.patch

In HRegion.nextInternal(int limit, String metric) we have a while(true) loop to fetch the next result that satisfies the filter condition. When the Filter filters out the current fetched row, we call nextRow(byte [] currentRow) before going on to the next row.

{code}
if (results.isEmpty() || filterRow()) {
  // this seems like a redundant step - we already consumed the row
  // there're no left overs.
  // the reasons for calling this method are:
  // 1. reset the filters.
  // 2. provide a hook to fast forward the row (used by subclasses)
  nextRow(currentRow);
{code}

"// 2. provide a hook to fast forward the row (used by subclasses)" — we can provide the same fast-forward support for the CP also.
[jira] [Commented] (HBASE-7637) hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
[ https://issues.apache.org/jira/browse/HBASE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559806#comment-13559806 ]

Elliott Clark commented on HBASE-7637:
--------------------------------------

package, install, or test would work, as they all imply test. Right now, regardless of this patch, compile is broken because of inter-module dependencies, so we're not really losing anything. Everything that could compile the project implies test. I moved the phase to test in order to duplicate what add_maven_test_classes_to_classpath used to give us.

hbase-hadoop1-compat conflicts with -Dhadoop.profile=2.0
--------------------------------------------------------
Key: HBASE-7637
URL: https://issues.apache.org/jira/browse/HBASE-7637
Project: HBase
Issue Type: Bug
Components: build
Affects Versions: 0.96.0
Reporter: nkeywal
Assignee: Elliott Clark
Priority: Critical
Fix For: 0.96.0
Attachments: HBASE-7637-0.patch

I'm unclear on the root cause / fix. Here is the scenario:

{noformat}
mvn clean package install -Dhadoop.profile=2.0 -DskipTests
bin/start-hbase.sh
{noformat}

fails with

{noformat}
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.metrics2.lib.MetricMutable
	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
{noformat}

doing

{noformat}
rm -rf hbase-hadoop1-compat/target/
{noformat}

makes it work. In the pom.xml, we never reference hadoop2-compat. But doing so does not help: hadoop1-compat is compiled and takes precedence over hadoop2...
[jira] [Commented] (HBASE-5664) CP hooks in Scan flow for fast forward when filter filters out a row
[ https://issues.apache.org/jira/browse/HBASE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559814#comment-13559814 ] Hadoop QA commented on HBASE-5664: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12565967/HBASE-5664_Trunk_V2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.TestLocalHBaseCluster Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/4125//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4125//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/4125//console This message is automatically generated. CP hooks in Scan flow for fast forward when filter filters out a row Key: HBASE-5664 URL: https://issues.apache.org/jira/browse/HBASE-5664 Project: HBase Issue Type: Improvement Components: Coprocessors, Filters Affects Versions: 0.92.1 Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.96.0, 0.94.5 Attachments: HBASE-5664_94.patch, HBASE-5664_94_V2.patch, HBASE-5664_Trunk.patch, HBASE-5664_Trunk_V2.patch In HRegion.nextInternal(int limit, String metric) We have while(true) loop so as to fetch a next result which satisfies filter condition. 
When Filter filters out the current fetched row we call nextRow(byte [] currentRow) before going with the next row. {code} if (results.isEmpty() || filterRow()) { // this seems like a redundant step - we already consumed the row // there're no left overs. // the reasons for calling this method are: // 1. reset the filters. // 2. provide a hook to fast forward the row (used by subclasses) nextRow(currentRow); {code} // 2. provide a hook to fast forward the row (used by subclasses) We can provide same feature of fast forward support for the CP also. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
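The "hook to fast forward the row" quoted above is the core of this issue. As a rough, hypothetical sketch of the idea in plain Java (invented names, not the actual HBase scanner or coprocessor API): when the filter rejects a row, the hook runs and may tell the scan where to resume, so rows known to fail are never examined.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of a scan loop with a per-rejection fast-forward hook.
public class FastForwardSketch {
    interface Filter { boolean accept(int row); }
    interface NextRowHook { int resumeAt(int rejectedRow); }

    /** Scans ascending row keys; visitedOut[0] counts rows examined. */
    static List<Integer> scan(int[] rows, Filter f, NextRowHook hook,
                              int[] visitedOut) {
        List<Integer> results = new ArrayList<>();
        int floor = Integer.MIN_VALUE;       // rows below this are skipped
        for (int row : rows) {
            if (row < floor) continue;       // fast-forwarded past this row
            visitedOut[0]++;                 // row actually examined
            if (f.accept(row)) results.add(row);
            else floor = hook.resumeAt(row); // hook may jump ahead
        }
        return results;
    }

    public static void main(String[] args) {
        int[] rows = {1, 2, 3, 4, 5, 6, 7};
        Filter accept5up = r -> r >= 5;
        int[] plain = {0}, fastFwd = {0};

        // default behavior: resume at the very next row (no fast forward)
        scan(rows, accept5up, r -> r + 1, plain);
        // a hook that knows nothing below 5 matches jumps straight there
        scan(rows, accept5up, r -> 5, fastFwd);

        System.out.println(plain[0]);   // 7: every row examined
        System.out.println(fastFwd[0]); // 4: rows 2..4 never examined
    }
}
```

Both scans return the same results; the hook only changes how much work the rejecting path does, which is exactly what exposing it to coprocessors is for.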
[jira] [Commented] (HBASE-7644) Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94
[ https://issues.apache.org/jira/browse/HBASE-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559827#comment-13559827 ] Sergey Shelukhin commented on HBASE-7644: - +1 Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94 --- Key: HBASE-7644 URL: https://issues.apache.org/jira/browse/HBASE-7644 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.94.5 Attachments: 7644-disable-show-table.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559834#comment-13559834 ] Sergey Shelukhin commented on HBASE-7588: - The imports added, and the blank line w/spaces before toString, seem unnecessary. Is any sort of compare really used on WakeupFlushThread? Otherwise +1. Fix two findbugs warning in MemStoreFlusher --- Key: HBASE-7588 URL: https://issues.apache.org/jira/browse/HBASE-7588 Project: HBase Issue Type: Bug Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7588-v0-trunk.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7503) Add exists(List) in HTableInterface to allow multiple parallel exists at one time
[ https://issues.apache.org/jira/browse/HBASE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559835#comment-13559835 ]

Sergey Shelukhin commented on HBASE-7503:
-----------------------------------------

Btw my above comments (other than on closestRowBefore) are a nit, shouldn't block.

Add exists(List) in HTableInterface to allow multiple parallel exists at one time
---------------------------------------------------------------------------------
Key: HBASE-7503
URL: https://issues.apache.org/jira/browse/HBASE-7503
Project: HBase
Issue Type: Improvement
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Minor
Fix For: 0.96.0
Attachments: HBASE-7503-v0-trunk.patch, HBASE-7503-v10-trunk.patch, HBASE-7503-v11-trunk.patch, HBASE-7503-v1-trunk.patch, HBASE-7503-v2-trunk.patch, HBASE-7503-v2-trunk.patch, HBASE-7503-v3-trunk.patch, HBASE-7503-v4-trunk.patch, HBASE-7503-v5-trunk.patch, HBASE-7503-v7-trunk.patch, HBASE-7503-v8-trunk.patch, HBASE-7503-v9-trunk.patch
Original Estimate: 5m
Remaining Estimate: 5m

We need to have a Boolean[] exists(List<Get> gets) throws IOException method implemented in HTableInterface.
[jira] [Commented] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559841#comment-13559841 ]

Ted Yu commented on HBASE-7588:
-------------------------------

{code}
+    if (this == obj)
+      return true;
+    if (obj == null || getClass() != obj.getClass())
+      return false;
{code}

style: move each return statement to the end of the line above.

Fix two findbugs warning in MemStoreFlusher
-------------------------------------------
Key: HBASE-7588
URL: https://issues.apache.org/jira/browse/HBASE-7588
Project: HBase
Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Minor
Fix For: 0.96.0
Attachments: HBASE-7588-v0-trunk.patch
[jira] [Commented] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs
[ https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559850#comment-13559850 ] Ted Yu commented on HBASE-3996: --- @Bryan: Can you attach latest patch to this issue ? Thanks Support multiple tables and scanners as input to the mapper in map/reduce jobs -- Key: HBASE-3996 URL: https://issues.apache.org/jira/browse/HBASE-3996 Project: HBase Issue Type: Improvement Components: mapreduce Reporter: Eran Kutner Assignee: Bryan Baugher Priority: Critical Fix For: 0.96.0, 0.94.5 Attachments: 3996-v10.txt, 3996-v11.txt, 3996-v12.txt, 3996-v2.txt, 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 3996-v6.txt, 3996-v7.txt, 3996-v8.txt, 3996-v9.txt, HBase-3996.patch It seems that in many cases feeding data from multiple tables or multiple scanners on a single table can save a lot of time when running map/reduce jobs. I propose a new MultiTableInputFormat class that would allow doing this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7644) Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94
[ https://issues.apache.org/jira/browse/HBASE-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559853#comment-13559853 ] Lars Hofhansl commented on HBASE-7644: -- +1 Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94 --- Key: HBASE-7644 URL: https://issues.apache.org/jira/browse/HBASE-7644 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.94.5 Attachments: 7644-disable-show-table.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs
[ https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Baugher updated HBASE-3996: - Attachment: 3996-v13.txt Attached latest patch address review comments Support multiple tables and scanners as input to the mapper in map/reduce jobs -- Key: HBASE-3996 URL: https://issues.apache.org/jira/browse/HBASE-3996 Project: HBase Issue Type: Improvement Components: mapreduce Reporter: Eran Kutner Assignee: Bryan Baugher Priority: Critical Fix For: 0.96.0, 0.94.5 Attachments: 3996-v10.txt, 3996-v11.txt, 3996-v12.txt, 3996-v13.txt, 3996-v2.txt, 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 3996-v6.txt, 3996-v7.txt, 3996-v8.txt, 3996-v9.txt, HBase-3996.patch It seems that in many cases feeding data from multiple tables or multiple scanners on a single table can save a lot of time when running map/reduce jobs. I propose a new MultiTableInputFormat class that would allow doing this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6815) [WINDOWS] Provide hbase scripts in order to start HBASE on Windows in a single user mode
[ https://issues.apache.org/jira/browse/HBASE-6815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559858#comment-13559858 ] Slavik Krassovsky commented on HBASE-6815: -- Thanks Enis! Addressed both comments - please take a look at v3 of the patch. [WINDOWS] Provide hbase scripts in order to start HBASE on Windows in a single user mode Key: HBASE-6815 URL: https://issues.apache.org/jira/browse/HBASE-6815 Project: HBase Issue Type: Sub-task Affects Versions: 0.94.3, 0.96.0 Reporter: Enis Soztutar Assignee: Slavik Krassovsky Attachments: hbase-6815_v1.patch, hbase-6815_v2.patch, hbase-6815_v3.patch Provide .cmd scripts in order to start HBASE on Windows in a single user mode -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6815) [WINDOWS] Provide hbase scripts in order to start HBASE on Windows in a single user mode
[ https://issues.apache.org/jira/browse/HBASE-6815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Slavik Krassovsky updated HBASE-6815: - Attachment: hbase-6815_v3.patch [WINDOWS] Provide hbase scripts in order to start HBASE on Windows in a single user mode Key: HBASE-6815 URL: https://issues.apache.org/jira/browse/HBASE-6815 Project: HBase Issue Type: Sub-task Affects Versions: 0.94.3, 0.96.0 Reporter: Enis Soztutar Assignee: Slavik Krassovsky Attachments: hbase-6815_v1.patch, hbase-6815_v2.patch, hbase-6815_v3.patch Provide .cmd scripts in order to start HBASE on Windows in a single user mode -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7268) correct local region location cache information can be overwritten w/stale information from an old server
[ https://issues.apache.org/jira/browse/HBASE-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559859#comment-13559859 ] Sergey Shelukhin commented on HBASE-7268: - [~stack] Hmm, I forgot about that issue. I will try to get back to it soon. Last time I checked having this fix caused the Targeted test to fail later on minicluster, although it passed for me on real cluster yesterday. correct local region location cache information can be overwritten w/stale information from an old server - Key: HBASE-7268 URL: https://issues.apache.org/jira/browse/HBASE-7268 Project: HBase Issue Type: Bug Affects Versions: 0.96.0 Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Minor Fix For: 0.96.0 Attachments: 7268-v6.patch, 7268-v8.patch, HBASE-7268-v0.patch, HBASE-7268-v0.patch, HBASE-7268-v1.patch, HBASE-7268-v2.patch, HBASE-7268-v2-plus-masterTs.patch, HBASE-7268-v2-plus-masterTs.patch, HBASE-7268-v3.patch, HBASE-7268-v4.patch, HBASE-7268-v5.patch, HBASE-7268-v6.patch, HBASE-7268-v7.patch, HBASE-7268-v8.patch, HBASE-7268-v9.patch Discovered via HBASE-7250; related to HBASE-5877. Test is writing from multiple threads. Server A has region R; client knows that. R gets moved from A to server B. B gets killed. R gets moved by master to server C. ~15 seconds later, client tries to write to it (on A?). Multiple client threads report from RegionMoved exception processing logic R moved from C to B, even though such transition never happened (neither in nor before the sequence described below). Not quite sure how the client learned of the transition to C, I assume it's from meta from some other thread... Then, put fails (it may fail due to accumulated errors that are not logged, which I am investigating... but the bogus cache update is there nonwithstanding). I have a patch but not sure if it works, test still fails locally for yet unknown reason. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7644) Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94
[ https://issues.apache.org/jira/browse/HBASE-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559862#comment-13559862 ] Ted Yu commented on HBASE-7644: --- Integrated to 0.94 branch. Thanks for the review, Sergey and Lars. Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94 --- Key: HBASE-7644 URL: https://issues.apache.org/jira/browse/HBASE-7644 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.94.5 Attachments: 7644-disable-show-table.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559863#comment-13559863 ]

Jean-Marc Spaggiari commented on HBASE-7588:
--------------------------------------------

Hi Ted, I'm sometimes asked to add brackets, and sometimes to put it on the same line. Which should I use?

{code}
+    if (obj == null || getClass() != obj.getClass()) return false;
{code}

or

{code}
+    if (obj == null || getClass() != obj.getClass()) {
+      return false;
+    }
{code}

Personally I prefer the 2nd one. I find it safer.

Fix two findbugs warning in MemStoreFlusher
-------------------------------------------
Key: HBASE-7588
URL: https://issues.apache.org/jira/browse/HBASE-7588
Project: HBase
Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
Priority: Minor
Fix For: 0.96.0
Attachments: HBASE-7588-v0-trunk.patch
[jira] [Commented] (HBASE-7571) add the notion of per-table or per-column family configuration
[ https://issues.apache.org/jira/browse/HBASE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559868#comment-13559868 ] Sergey Shelukhin commented on HBASE-7571: - Thanks! bq. I wonder though if HTableDescriptor.getDefaultValues needs to be removed? It's not used, so I removed it... The rest fixed. add the notion of per-table or per-column family configuration -- Key: HBASE-7571 URL: https://issues.apache.org/jira/browse/HBASE-7571 Project: HBase Issue Type: Sub-task Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HBASE-7571-v0-based-on-HBASE-7563.patch, HBASE-7571-v0-including-HBASE-7563.patch, HBASE-7571-v1.patch Main part of split HBASE-7236. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559869#comment-13559869 ] Ted Yu commented on HBASE-7588: --- Go ahead with second style. Both are accepted. My comment was based on the fact that the combined line would be within 100 characters limit. Thanks Fix two findbugs warning in MemStoreFlusher --- Key: HBASE-7588 URL: https://issues.apache.org/jira/browse/HBASE-7588 Project: HBase Issue Type: Bug Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7588-v0-trunk.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2
[ https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-7594: -- Status: Open (was: Patch Available) TestLocalHBaseCluster failing on ubuntu2 Key: HBASE-7594 URL: https://issues.apache.org/jira/browse/HBASE-7594 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.96.0 Reporter: Andrew Purtell Assignee: Andrew Purtell Attachments: 7594-1.patch, 7594-2.patch, 7594-3.patch, 7594-4.patch {noformat} java.io.IOException: java.io.IOException: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Caused by: java.io.IOException: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450) at org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:215) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583) at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) ... 3 more Caused by: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:607) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:615) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.init(HFileReaderV2.java:115) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.init(StoreFile.java:1294) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628) at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:426) at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:422) ... 8 more Caused by: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at java.lang.Class.newInstance0(Class.java:340) at java.lang.Class.newInstance(Class.java:308) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:605) ... 17 more {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
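The root cause at the bottom of this trace is `Class.newInstance()` being invoked on `org.apache.hadoop.io.RawComparator`, which is an interface and therefore cannot be instantiated reflectively. A minimal sketch of that failure mode, using the JDK's `Comparator` as a stand-in for `RawComparator` so the example is self-contained:

```java
import java.util.Comparator;

public class NewInstanceDemo {
    public static void main(String[] args) {
        try {
            // Reflectively instantiating an interface fails the same way the
            // RawComparator line does at the bottom of the stack trace above.
            Comparator.class.newInstance();
            System.out.println("unexpected: instantiation succeeded");
        } catch (InstantiationException e) {
            // The failure mode: the trailer resolved an interface (or
            // abstract class) instead of a concrete comparator class.
            System.out.println("InstantiationException for " + e.getMessage());
        } catch (IllegalAccessException e) {
            System.out.println("IllegalAccessException");
        }
    }
}
```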
[jira] [Updated] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2
[ https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-7594: -- Attachment: 7594-4.patch Really treat the symptom this time, see what else fails. TestLocalHBaseCluster failing on ubuntu2 Key: HBASE-7594 URL: https://issues.apache.org/jira/browse/HBASE-7594 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.96.0 Reporter: Andrew Purtell Assignee: Andrew Purtell Attachments: 7594-1.patch, 7594-2.patch, 7594-3.patch, 7594-4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2
[ https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-7594: -- Status: Patch Available (was: Open) TestLocalHBaseCluster failing on ubuntu2 Key: HBASE-7594 URL: https://issues.apache.org/jira/browse/HBASE-7594 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.96.0 Reporter: Andrew Purtell Assignee: Andrew Purtell Attachments: 7594-1.patch, 7594-2.patch, 7594-3.patch, 7594-4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HBASE-7642) HBase shell cannot set Compression
[ https://issues.apache.org/jira/browse/HBASE-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar resolved HBASE-7642. -- Resolution: Fixed Hadoop Flags: Reviewed Committed. Thanks for reviews. HBase shell cannot set Compression -- Key: HBASE-7642 URL: https://issues.apache.org/jira/browse/HBASE-7642 Project: HBase Issue Type: Bug Components: shell Affects Versions: 0.96.0 Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.96.0 Attachments: hbase-7642_v1.patch HBASE-7063 changed the package name for the Compression class, but failed to update admin.rb for the shell. {code} hbase(main):005:0> alter 'cluster_test', {NAME => 'test_cf', DATA_BLOCK_ENCODING => 'FAST_DIFF', COMPRESSION => 'GZ'} ERROR: cannot load Java class org.apache.hadoop.hbase.io.hfile.Compression {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7571) add the notion of per-table or per-column family configuration
[ https://issues.apache.org/jira/browse/HBASE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HBASE-7571: Attachment: HBASE-7571-v2.patch CR feedback; fix javadoc warnings add the notion of per-table or per-column family configuration -- Key: HBASE-7571 URL: https://issues.apache.org/jira/browse/HBASE-7571 Project: HBase Issue Type: Sub-task Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HBASE-7571-v0-based-on-HBASE-7563.patch, HBASE-7571-v0-including-HBASE-7563.patch, HBASE-7571-v1.patch, HBASE-7571-v2.patch Main part of split HBASE-7236. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-7646) Make forkedProcessTimeoutInSeconds configurable
Jimmy Xiang created HBASE-7646: -- Summary: Make forkedProcessTimeoutInSeconds configurable Key: HBASE-7646 URL: https://issues.apache.org/jira/browse/HBASE-7646 Project: HBase Issue Type: Bug Components: build Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Trivial Command line property surefire.timeout somehow doesn't work. It may be because forkedProcessTimeoutInSeconds is hard-coded to 900. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
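In Maven terms, the likely shape of the fix is to stop hard-coding the fork timeout and bind it to a user-settable property, so that `-Dsurefire.timeout=...` on the command line takes effect again. A sketch of what that surefire configuration could look like (the property wiring and default value are illustrative, not the committed patch):

```xml
<properties>
  <!-- Default fork timeout; overridable with -Dsurefire.timeout=... -->
  <surefire.timeout>900</surefire.timeout>
</properties>

<!-- elsewhere in the pom, in the surefire plugin configuration: -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Referencing the property instead of the literal 900 is what
         makes the command-line override work. -->
    <forkedProcessTimeoutInSeconds>${surefire.timeout}</forkedProcessTimeoutInSeconds>
  </configuration>
</plugin>
```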
[jira] [Commented] (HBASE-7631) Region w/ few, fat reads was hard to find on a box carrying hundreds of regions
[ https://issues.apache.org/jira/browse/HBASE-7631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559917#comment-13559917 ] Jean-Daniel Cryans commented on HBASE-7631: --- Things like the slow query log should include the region's encoded name and some row key. And +1 on printing, with details, when we are returning a big response. Region w/ few, fat reads was hard to find on a box carrying hundreds of regions --- Key: HBASE-7631 URL: https://issues.apache.org/jira/browse/HBASE-7631 Project: HBase Issue Type: Bug Components: metrics Reporter: stack All of a sudden on a prod cluster, a table's rows gained girth... hundreds of thousands of rows... and the application was pulling them all back every time, but only once a second or so. The regionserver was carrying hundreds of regions. It was plain that there was lots of outbound network traffic. It was tough figuring out which region was the culprit (JD's trick was moving the regions off one at a time while watching outbound network traffic on the cluster to see which one spiked next -- it worked, but it took some time). If we had per-region read/write sizes in metrics, that would have saved a bunch of diagnostic time. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7646) Make forkedProcessTimeoutInSeconds configurable
[ https://issues.apache.org/jira/browse/HBASE-7646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7646: --- Attachment: trunk-7646.patch Make forkedProcessTimeoutInSeconds configurable --- Key: HBASE-7646 URL: https://issues.apache.org/jira/browse/HBASE-7646 Project: HBase Issue Type: Bug Components: build Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Trivial Attachments: trunk-7646.patch Command line property surefire.timeout somehow doesn't work. It may be because forkedProcessTimeoutInSeconds is hard-coded to 900. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7646) Make forkedProcessTimeoutInSeconds configurable
[ https://issues.apache.org/jira/browse/HBASE-7646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7646: --- Status: Patch Available (was: Open) Make forkedProcessTimeoutInSeconds configurable --- Key: HBASE-7646 URL: https://issues.apache.org/jira/browse/HBASE-7646 Project: HBase Issue Type: Bug Components: build Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Trivial Attachments: trunk-7646.patch Command line property surefire.timeout somehow doesn't work. It may be because forkedProcessTimeoutInSeconds is hard-coded to 900. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6466) Enable multi-thread for memstore flush
[ https://issues.apache.org/jira/browse/HBASE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HBASE-6466: Attachment: HBASE-6466-v5.patch Rebasing the patch. Should be ready to commit based on the above; medium tests pass for me, but I'd wait for Hadoop QA. Enable multi-thread for memstore flush -- Key: HBASE-6466 URL: https://issues.apache.org/jira/browse/HBASE-6466 Project: HBase Issue Type: Improvement Components: regionserver Affects Versions: 0.96.0 Reporter: chunhui shen Assignee: chunhui shen Priority: Critical Fix For: 0.96.0 Attachments: HBASE-6466.patch, HBASE-6466v2.patch, HBASE-6466v3.1.patch, HBASE-6466v3.patch, HBASE-6466-v4.patch, HBASE-6466-v4.patch, HBASE-6466-v5.patch If KVs are large, or the HLog is closed under heavy write pressure, we found the memstore is often above the high water mark, which blocks puts. So should we enable multiple threads for memstore flush? Some performance test data for reference: 1. test environment: random writing; upper memstore limit 5.6GB; lower memstore limit 4.8GB; 400 regions per regionserver; row len=50 bytes, value len=1024 bytes; 5 regionservers, 300 ipc handlers per regionserver; 5 clients, 50 handler threads per client for writing. 2. test results: one cacheFlush handler: tps 7.8k/s per regionserver, flush 10.1MB/s per regionserver, many aboveGlobalMemstoreLimit blockings appear; two cacheFlush handlers: tps 10.7k/s per regionserver, flush 12.46MB/s per regionserver; with 200 handler threads per client and two cacheFlush handlers: tps 16.1k/s per regionserver, flush 18.6MB/s per regionserver. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
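The idea in the description boils down to draining a single flush queue with N handler threads instead of one. A minimal, self-contained sketch of that pattern (class and method names are illustrative, not the patch's classes; the real flush writes a memstore out as an HFile, which the counter stands in for):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class MultiThreadFlushSketch {
    // Drain `pendingRegions` queued flush requests with `handlerCount`
    // handler threads; returns the number of flushes performed.
    static int drain(int handlerCount, int pendingRegions) throws InterruptedException {
        BlockingQueue<Integer> flushQueue = new LinkedBlockingQueue<>();
        for (int i = 0; i < pendingRegions; i++) {
            flushQueue.add(i); // each entry stands for one region's pending flush
        }
        AtomicInteger flushed = new AtomicInteger();
        ExecutorService handlers = Executors.newFixedThreadPool(handlerCount);
        for (int h = 0; h < handlerCount; h++) {
            handlers.submit(() -> {
                // Each handler pulls the next region and "flushes" it.
                while (flushQueue.poll() != null) {
                    flushed.incrementAndGet();
                }
            });
        }
        handlers.shutdown();
        handlers.awaitTermination(10, TimeUnit.SECONDS);
        return flushed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("flushed " + drain(2, 100) + " regions with 2 handlers");
    }
}
```

With a single handler, a slow flush blocks every other region behind it; with two or more, flushes proceed in parallel, which matches the throughput gains in the test data above.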
[jira] [Commented] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2
[ https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559922#comment-13559922 ] stack commented on HBASE-7594: -- I just enabled saving test output. Hopefully it will show up in future builds. TestLocalHBaseCluster failing on ubuntu2 Key: HBASE-7594 URL: https://issues.apache.org/jira/browse/HBASE-7594 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.96.0 Reporter: Andrew Purtell Assignee: Andrew Purtell Attachments: 7594-1.patch, 7594-2.patch, 7594-3.patch, 7594-4.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs
[ https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559923#comment-13559923 ] Hadoop QA commented on HBASE-3996: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12565984/3996-v13.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 4 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.TestLocalHBaseCluster Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/4126//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/4126//console This message is automatically generated. Support multiple tables and scanners as input to the mapper in map/reduce jobs -- Key: HBASE-3996 URL: https://issues.apache.org/jira/browse/HBASE-3996 Project: HBase Issue Type: Improvement Components: mapreduce Reporter: Eran Kutner Assignee: Bryan Baugher Priority: Critical Fix For: 0.96.0, 0.94.5 Attachments: 3996-v10.txt, 3996-v11.txt, 3996-v12.txt, 3996-v13.txt, 3996-v2.txt, 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 3996-v6.txt, 3996-v7.txt, 3996-v8.txt, 3996-v9.txt, HBase-3996.patch It seems that in many cases feeding data from multiple tables or multiple scanners on a single table can save a lot of time when running map/reduce jobs. 
I propose a new MultiTableInputFormat class that would allow doing this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7646) Make forkedProcessTimeoutInSeconds configurable
[ https://issues.apache.org/jira/browse/HBASE-7646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559924#comment-13559924 ] stack commented on HBASE-7646: -- +1 Make forkedProcessTimeoutInSeconds configurable --- Key: HBASE-7646 URL: https://issues.apache.org/jira/browse/HBASE-7646 Project: HBase Issue Type: Bug Components: build Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Trivial Attachments: 0.94-7646.patch, trunk-7646.patch Command line property surefire.timeout somehow doesn't work. It may be because forkedProcessTimeoutInSeconds is hard-coded to 900. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7646) Make forkedProcessTimeoutInSeconds configurable
[ https://issues.apache.org/jira/browse/HBASE-7646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7646: --- Attachment: 0.94-7646.patch Make forkedProcessTimeoutInSeconds configurable --- Key: HBASE-7646 URL: https://issues.apache.org/jira/browse/HBASE-7646 Project: HBase Issue Type: Bug Components: build Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Trivial Attachments: 0.94-7646.patch, trunk-7646.patch Command line property surefire.timeout somehow doesn't work. It may be because forkedProcessTimeoutInSeconds is hard-coded to 900. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-7588: - Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks JM. Committed to trunk. Fix two findbugs warning in MemStoreFlusher --- Key: HBASE-7588 URL: https://issues.apache.org/jira/browse/HBASE-7588 Project: HBase Issue Type: Bug Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7588-v0-trunk.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Reopened] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack reopened HBASE-7588: -- Reopened. Premature commit. Pardon me. Fix two findbugs warning in MemStoreFlusher --- Key: HBASE-7588 URL: https://issues.apache.org/jira/browse/HBASE-7588 Project: HBase Issue Type: Bug Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7588-v0-trunk.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs
[ https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559934#comment-13559934 ] Ted Yu commented on HBASE-3996: --- If you search https://builds.apache.org/job/PreCommit-HBASE-Build/4126//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.xml for MultiTableInputFormatBase, you would see the following: {code} <BugInstance type="DMI_INVOKING_TOSTRING_ON_ARRAY" priority="2" abbrev="USELESS_STRING" category="CORRECTNESS"> <Class classname="org.apache.hadoop.hbase.mapreduce.MultiTableInputFormatBase"> <SourceLine classname="org.apache.hadoop.hbase.mapreduce.MultiTableInputFormatBase" start="47" end="214" sourcefile="MultiTableInputFormatBase.java" sourcepath="org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java"/> </Class> </BugInstance> {code} Please address the above warning. Support multiple tables and scanners as input to the mapper in map/reduce jobs -- Key: HBASE-3996 URL: https://issues.apache.org/jira/browse/HBASE-3996 Project: HBase Issue Type: Improvement Components: mapreduce Reporter: Eran Kutner Assignee: Bryan Baugher Priority: Critical Fix For: 0.96.0, 0.94.5 Attachments: 3996-v10.txt, 3996-v11.txt, 3996-v12.txt, 3996-v13.txt, 3996-v2.txt, 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 3996-v6.txt, 3996-v7.txt, 3996-v8.txt, 3996-v9.txt, HBase-3996.patch It seems that in many cases feeding data from multiple tables or multiple scanners on a single table can save a lot of time when running map/reduce jobs. I propose a new MultiTableInputFormat class that would allow doing this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
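For context, `DMI_INVOKING_TOSTRING_ON_ARRAY` fires where string concatenation (or an explicit call) invokes `toString()` on an array, which prints the type tag and identity hash rather than the contents. A generic illustration of the bug pattern and the usual `Arrays.toString` fix (not the actual MultiTableInputFormatBase code):

```java
import java.util.Arrays;

public class ArrayToStringDemo {
    // Buggy pattern findbugs flags: concatenation calls toString() on the
    // array, yielding something like "[B@1b6d3586" instead of the contents.
    static String buggy(byte[] startRow) {
        return "start row: " + startRow;
    }

    // The usual fix: render the array contents explicitly.
    static String fixed(byte[] startRow) {
        return "start row: " + Arrays.toString(startRow);
    }

    public static void main(String[] args) {
        byte[] startRow = {1, 2, 3};
        System.out.println(buggy(startRow));
        System.out.println(fixed(startRow)); // start row: [1, 2, 3]
    }
}
```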
[jira] [Commented] (HBASE-6815) [WINDOWS] Provide hbase scripts in order to start HBASE on Windows in a single user mode
[ https://issues.apache.org/jira/browse/HBASE-6815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559938#comment-13559938 ] Hadoop QA commented on HBASE-6815: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12565985/hbase-6815_v3.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 lineLengths{color}. The patch introduces lines longer than 100 {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.regionserver.wal.TestHLog Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/4127//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4127//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/4127//console This message is automatically generated. [WINDOWS] Provide hbase scripts in order to start HBASE on Windows in a single user mode Key: HBASE-6815 URL: https://issues.apache.org/jira/browse/HBASE-6815 Project: HBase Issue Type: Sub-task Affects Versions: 0.94.3, 0.96.0 Reporter: Enis Soztutar Assignee: Slavik Krassovsky Attachments: hbase-6815_v1.patch, hbase-6815_v2.patch, hbase-6815_v3.patch Provide .cmd scripts in order to start HBASE on Windows in a single user mode -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7268) correct local region location cache information can be overwritten w/stale information from an old server
[ https://issues.apache.org/jira/browse/HBASE-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559936#comment-13559936 ] stack commented on HBASE-7268: -- [~sershe] I tried it and yeah, still fails. This test is gating our being able to try something up on bigtop infra. No pressure. Smile. correct local region location cache information can be overwritten w/stale information from an old server - Key: HBASE-7268 URL: https://issues.apache.org/jira/browse/HBASE-7268 Project: HBase Issue Type: Bug Affects Versions: 0.96.0 Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Minor Fix For: 0.96.0 Attachments: 7268-v6.patch, 7268-v8.patch, HBASE-7268-v0.patch, HBASE-7268-v0.patch, HBASE-7268-v1.patch, HBASE-7268-v2.patch, HBASE-7268-v2-plus-masterTs.patch, HBASE-7268-v2-plus-masterTs.patch, HBASE-7268-v3.patch, HBASE-7268-v4.patch, HBASE-7268-v5.patch, HBASE-7268-v6.patch, HBASE-7268-v7.patch, HBASE-7268-v8.patch, HBASE-7268-v9.patch Discovered via HBASE-7250; related to HBASE-5877. Test is writing from multiple threads. Server A has region R; the client knows that. R gets moved from A to server B. B gets killed. R gets moved by the master to server C. ~15 seconds later, the client tries to write to it (on A?). Multiple client threads report, from the RegionMoved exception processing logic, that R moved from C to B, even though such a transition never happened (neither in nor before the sequence described below). Not quite sure how the client learned of the transition to C; I assume it's from meta, from some other thread... Then, the put fails (it may fail due to accumulated errors that are not logged, which I am investigating... but the bogus cache update is there notwithstanding). I have a patch but am not sure if it works; the test still fails locally for a yet-unknown reason.
[jira] [Updated] (HBASE-7643) HFileArchiver.resolveAndArchive() race condition and snapshot data loss
[ https://issues.apache.org/jira/browse/HBASE-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-7643: --- Attachment: HBASE-7653-p4-v0.patch p4-v0 is the 4th proposal implementation. Still trying to figure out the best way to test this bug; maybe one thread that does while (true) hfileCleaner.chore() and another that tries to archive files... HFileArchiver.resolveAndArchive() race condition and snapshot data loss --- Key: HBASE-7643 URL: https://issues.apache.org/jira/browse/HBASE-7643 Project: HBase Issue Type: Bug Affects Versions: hbase-6055, 0.96.0 Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Blocker Fix For: 0.96.0, 0.94.5 Attachments: HBASE-7653-p4-v0.patch
* The master has an HFile cleaner thread (responsible for cleaning the /hbase/.archive dir)
** /hbase/.archive/table/region/family/hfile
** if the table/region/family directory is empty, the cleaner removes it
* The master can archive files (from another thread, e.g. DeleteTableHandler)
* The region can archive files (from another server/process, e.g. compaction)
The simplified file archiving code looks like this:
{code}
HFileArchiver.resolveAndArchive(...) {
  // ensure that the archive dir exists
  fs.mkdir(archiveDir);
  // move the file to the archive
  success = fs.rename(originalPath/fileName, archiveDir/fileName);
  // if the rename failed, delete the file without archiving
  if (!success) fs.delete(originalPath/fileName);
}
{code}
Since there's no synchronization between HFileArchiver.resolveAndArchive() and the cleaner run (different process, thread, ...) you can end up moving something into a directory that doesn't exist.
{code}
fs.mkdir(archiveDir);
// HFileCleaner chore starts at this point
// and the archive directory that we just ensured to be present gets removed.
// The rename at this point will fail since the parent directory is missing.
success = fs.rename(originalPath/fileName, archiveDir/fileName)
{code}
The problem with deleting the file without archiving is that if you have a snapshot, or a cloned table, that relies on that file being present, you are losing data. Possible solutions:
* Create a ZooKeeper lock, to notify the master ("Hey, I'm archiving something, wait a bit")
* Add an RS -> Master call to let the master remove files and avoid this kind of situation
* Avoid removing empty directories from the archive if the table exists or is not disabled
* Add a try/catch with retries around the fs.rename
The last one, the easiest one, looks like:
{code}
for (int i = 0; i < retries; ++i) {
  // ensure the archive directory is present
  fs.mkdir(archiveDir);
  // possible race here
  // try to archive the file
  success = fs.rename(originalPath/fileName, archiveDir/fileName);
  if (success) break;
}
{code}
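A self-contained sketch of that retry approach, using java.nio.file in place of Hadoop's FileSystem API (the method and path names here are illustrative; the real fix operates on fs.mkdir/fs.rename):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ArchiveRetrySketch {
    // Retry the mkdir+rename sequence so a concurrent cleaner deleting
    // archiveDir between the two calls only costs us one iteration.
    static boolean archiveWithRetries(Path source, Path archiveDir, int retries)
            throws IOException {
        for (int i = 0; i < retries; ++i) {
            // ensure the archive directory is present (no-op if it already is)
            Files.createDirectories(archiveDir);
            try {
                // try to archive the file
                Files.move(source, archiveDir.resolve(source.getFileName()));
                return true;
            } catch (IOException e) {
                // archiveDir may have vanished between createDirectories and
                // move -- the race this issue describes; loop and try again
            }
        }
        return false;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("archive-demo");
        Path hfile = Files.createFile(tmp.resolve("hfile1"));
        Path archive = tmp.resolve(".archive/table/region/family");
        boolean ok = archiveWithRetries(hfile, archive, 3);
        System.out.println(ok && Files.exists(archive.resolve("hfile1")));
    }
}
```

Note the window is only narrowed, not closed: the loop just makes a lost race cost one retry instead of a deleted file.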
[jira] [Commented] (HBASE-7646) Make forkedProcessTimeoutInSeconds configurable
[ https://issues.apache.org/jira/browse/HBASE-7646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559941#comment-13559941 ] Hadoop QA commented on HBASE-7646: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12565996/0.94-7646.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified tests. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/4130//console This message is automatically generated. Make forkedProcessTimeoutInSeconds configurable --- Key: HBASE-7646 URL: https://issues.apache.org/jira/browse/HBASE-7646 Project: HBase Issue Type: Bug Components: build Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Trivial Attachments: 0.94-7646.patch, trunk-7646.patch Command line property surefire.timeout somehow doesn't work. It may be because forkedProcessTimeoutInSeconds is hard-coded to 900. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2
[ https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559944#comment-13559944 ] Hadoop QA commented on HBASE-7594: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12565988/7594-4.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.TestLocalHBaseCluster Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/4128//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4128//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/4128//console This message is automatically generated. 
TestLocalHBaseCluster failing on ubuntu2 Key: HBASE-7594 URL: https://issues.apache.org/jira/browse/HBASE-7594 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.96.0 Reporter: Andrew Purtell Assignee: Andrew Purtell Attachments: 7594-1.patch, 7594-2.patch, 7594-3.patch, 7594-4.patch {noformat} java.io.IOException: java.io.IOException: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Caused by: java.io.IOException: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450) at org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:215) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at
[jira] [Commented] (HBASE-7643) HFileArchiver.resolveAndArchive() race condition and snapshot data loss
[ https://issues.apache.org/jira/browse/HBASE-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559949#comment-13559949 ] Ted Yu commented on HBASE-7643: ---
{code}
success = false;
+ ioe = e;
{code}
The success flag is no longer needed - we have ioe now.
{code}
+// (we're in a retry loop, so don't warry too much about the exception)
{code}
typo: warry.
{code}
+ LOG.warn("Failed to create the archive directory: " + archiveDir, e);
{code}
What if the IOE came from the fs.exists() call? HFileArchiver.resolveAndArchive() race condition and snapshot data loss --- Key: HBASE-7643 URL: https://issues.apache.org/jira/browse/HBASE-7643 Project: HBase Issue Type: Bug Affects Versions: hbase-6055, 0.96.0 Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Blocker Fix For: 0.96.0, 0.94.5 Attachments: HBASE-7653-p4-v0.patch
* The master has an HFile cleaner thread (responsible for cleaning the /hbase/.archive dir)
** /hbase/.archive/table/region/family/hfile
** if the table/region/family directory is empty, the cleaner removes it
* The master can archive files (from another thread, e.g. DeleteTableHandler)
* The region can archive files (from another server/process, e.g. compaction)
The simplified file archiving code looks like this:
{code}
HFileArchiver.resolveAndArchive(...) {
  // ensure that the archive dir exists
  fs.mkdir(archiveDir);
  // move the file to the archive
  success = fs.rename(originalPath/fileName, archiveDir/fileName);
  // if the rename failed, delete the file without archiving
  if (!success) fs.delete(originalPath/fileName);
}
{code}
Since there's no synchronization between HFileArchiver.resolveAndArchive() and the cleaner run (different process, thread, ...) you can end up moving something into a directory that doesn't exist.
{code}
fs.mkdir(archiveDir);
// HFileCleaner chore starts at this point
// and the archive directory that we just ensured to be present gets removed.
// The rename at this point will fail since the parent directory is missing.
success = fs.rename(originalPath/fileName, archiveDir/fileName)
{code}
The problem with deleting the file without archiving is that if you have a snapshot, or a cloned table, that relies on that file being present, you are losing data. Possible solutions:
* Create a ZooKeeper lock, to notify the master ("Hey, I'm archiving something, wait a bit")
* Add an RS -> Master call to let the master remove files and avoid this kind of situation
* Avoid removing empty directories from the archive if the table exists or is not disabled
* Add a try/catch with retries around the fs.rename
The last one, the easiest one, looks like:
{code}
for (int i = 0; i < retries; ++i) {
  // ensure the archive directory is present
  fs.mkdir(archiveDir);
  // possible race here
  // try to archive the file
  success = fs.rename(originalPath/fileName, archiveDir/fileName);
  if (success) break;
}
{code}
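Ted's point about the success flag can be sketched as follows: instead of tracking a boolean, remember the last IOException across retries and rethrow it once the attempts are exhausted. This is a generic sketch of the pattern, not the actual patch:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

public class RetryWithLastException {
    interface Op { void run() throws IOException; }

    // Retry an operation; keep the last IOException instead of a separate
    // success flag, and rethrow it if every attempt fails.
    static void runWithRetries(Op op, int retries) throws IOException {
        IOException lastIoe = null;
        for (int i = 0; i < retries; ++i) {
            try {
                op.run();
                return;          // succeeded; nothing to rethrow
            } catch (IOException e) {
                // we're in a retry loop, so don't worry too much
                // about this attempt's failure; just remember it
                lastIoe = e;
            }
        }
        if (lastIoe != null) throw lastIoe;   // all attempts failed
    }

    public static void main(String[] args) throws IOException {
        AtomicInteger calls = new AtomicInteger();
        // an operation that fails twice, then succeeds on the third call
        runWithRetries(() -> {
            if (calls.incrementAndGet() < 3) throw new IOException("transient");
        }, 5);
        System.out.println(calls.get());
    }
}
```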
[jira] [Commented] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2
[ https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559957#comment-13559957 ] Andrew Purtell commented on HBASE-7594: --- Thanks. Now we can see what's actually going on after treating the symptom in FixedFileTrailer: I'm guessing a v1 HFile is being opened as a v2? Hence the trouble with the comparator. {noformat} 2013-01-22 19:42:47,914 ERROR [RS_OPEN_ROOT-asf000.sp2.ygridcore.net,40563,1358883761837-0] handler.OpenRegionHandler(443): Failed open of region=-ROOT-,,0.70236052, starting to roll back the global memstore size. java.io.IOException: java.io.IOException: java.io.IOException: Expected block of type ROOT_INDEX but found DATA at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:616) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:537) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4033) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3983) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Caused by: java.io.IOException: java.io.IOException: Expected block of type ROOT_INDEX but found DATA at org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:466) at org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:220) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3016) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:589) at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:587) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) ... 3 more Caused by: java.io.IOException: Expected block of type ROOT_INDEX but found DATA at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1215) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.init(HFileReaderV2.java:129) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.init(StoreFile.java:1294) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628) at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:442) at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:438) ... 8 more {noformat} So this is crapping out trying to open some store file previously flushed for -ROOT-. Detritus from an earlier test? Right above we can see it's using /tmp/hbase-jenkins: {noformat} 2013-01-22 19:42:47,811 DEBUG [RS_OPEN_ROOT-asf000.sp2.ygridcore.net,40563,1358883761837-0] wal.SequenceFileLogWriter(193): Path=file:/tmp/hbase-jenkins/hbase/.logs/asf000.sp2.ygridcore.net,40563,1358883761837/asf000.sp2.ygridcore.net%2C40563%2C1358883761837.1358883767809.meta, compression=false {noformat} Maybe picking up some junk. 
TestLocalHBaseCluster failing on ubuntu2 Key: HBASE-7594 URL: https://issues.apache.org/jira/browse/HBASE-7594 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.96.0 Reporter: Andrew Purtell Assignee: Andrew Purtell Attachments: 7594-1.patch, 7594-2.patch, 7594-3.patch, 7594-4.patch {noformat} java.io.IOException: java.io.IOException: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092) at
[jira] [Updated] (HBASE-7646) Make forkedProcessTimeoutInSeconds configurable
[ https://issues.apache.org/jira/browse/HBASE-7646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7646: --- Resolution: Fixed Fix Version/s: 0.94.5 0.96.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Make forkedProcessTimeoutInSeconds configurable --- Key: HBASE-7646 URL: https://issues.apache.org/jira/browse/HBASE-7646 Project: HBase Issue Type: Bug Components: build Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Trivial Fix For: 0.96.0, 0.94.5 Attachments: 0.94-7646.patch, trunk-7646.patch Command line property surefire.timeout somehow doesn't work. It may be because forkedProcessTimeoutInSeconds is hard-coded to 900. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7646) Make forkedProcessTimeoutInSeconds configurable
[ https://issues.apache.org/jira/browse/HBASE-7646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559958#comment-13559958 ] Jimmy Xiang commented on HBASE-7646: I verified I can change forkedProcessTimeoutInSeconds with surefire.timeout from command line now. Thanks Stack for the review. Integrated into trunk and 0.94. Make forkedProcessTimeoutInSeconds configurable --- Key: HBASE-7646 URL: https://issues.apache.org/jira/browse/HBASE-7646 Project: HBase Issue Type: Bug Components: build Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Trivial Attachments: 0.94-7646.patch, trunk-7646.patch Command line property surefire.timeout somehow doesn't work. It may be because forkedProcessTimeoutInSeconds is hard-coded to 900. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
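The fix presumably wires the hard-coded Surefire timeout to the surefire.timeout property so that -Dsurefire.timeout=... works from the command line. A pom fragment along those lines (the property name comes from the comment above; the exact plugin configuration shown here is an assumption, not the committed patch):

```xml
<properties>
  <!-- default; override with -Dsurefire.timeout=... on the command line -->
  <surefire.timeout>900</surefire.timeout>
</properties>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- was hard-coded to 900; now reads the overridable property -->
    <forkedProcessTimeoutInSeconds>${surefire.timeout}</forkedProcessTimeoutInSeconds>
  </configuration>
</plugin>
```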
[jira] [Commented] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2
[ https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559960#comment-13559960 ] Andrew Purtell commented on HBASE-7594: --- I have to head out for a bit but will try next a patch that fixes TestLocalHBaseCluster to not use /tmp/hbase-${user.name} as the data dir. TestLocalHBaseCluster failing on ubuntu2 Key: HBASE-7594 URL: https://issues.apache.org/jira/browse/HBASE-7594 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.96.0 Reporter: Andrew Purtell Assignee: Andrew Purtell Attachments: 7594-1.patch, 7594-2.patch, 7594-3.patch, 7594-4.patch {noformat} java.io.IOException: java.io.IOException: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Caused by: java.io.IOException: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450) at org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:215) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060) at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) ... 3 more Caused by: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:607) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:615) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.init(HFileReaderV2.java:115) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.init(StoreFile.java:1294) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628) at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:426) at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:422) ... 8 more Caused by: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at java.lang.Class.newInstance0(Class.java:340) at java.lang.Class.newInstance(Class.java:308) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:605) ... 17 more {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
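The root cause in the trace above is reflective instantiation of org.apache.hadoop.io.RawComparator, which is an interface: Class.newInstance() can only construct concrete classes with a no-arg constructor, and throws InstantiationException for interfaces and abstract classes. A minimal demonstration (RawComparatorLike is a stand-in interface, not the real Hadoop type):

```java
public class InstantiationDemo {
    // stand-in for org.apache.hadoop.io.RawComparator, which is an interface
    interface RawComparatorLike {}

    public static void main(String[] args) {
        try {
            // Reflective instantiation of an interface always fails,
            // just like FixedFileTrailer.createComparator did here.
            Object c = RawComparatorLike.class.newInstance();
            System.out.println("instantiated: " + c);
        } catch (InstantiationException | IllegalAccessException e) {
            System.out.println(e.getClass().getSimpleName());
        }
    }
}
```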
[jira] [Commented] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2
[ https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559961#comment-13559961 ] Andrew Purtell commented on HBASE-7594: --- By the way thanks [~saint@gmail.com] for enabling this. Now we can throw things at Jenkins specials without needing to patch trunk, via patches on precommit builds. TestLocalHBaseCluster failing on ubuntu2 Key: HBASE-7594 URL: https://issues.apache.org/jira/browse/HBASE-7594 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.96.0 Reporter: Andrew Purtell Assignee: Andrew Purtell Attachments: 7594-1.patch, 7594-2.patch, 7594-3.patch, 7594-4.patch {noformat} java.io.IOException: java.io.IOException: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Caused by: java.io.IOException: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450) at org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:215) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060) at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) ... 3 more Caused by: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:607) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:615) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.init(HFileReaderV2.java:115) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.init(StoreFile.java:1294) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628) at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:426) at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:422) ... 8 more Caused by: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at java.lang.Class.newInstance0(Class.java:340) at java.lang.Class.newInstance(Class.java:308) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:605) ... 17 more {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (HBASE-7594) TestLocalHBaseCluster failing on ubuntu2
[ https://issues.apache.org/jira/browse/HBASE-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559961#comment-13559961 ] Andrew Purtell edited comment on HBASE-7594 at 1/22/13 8:08 PM: By the way thanks [~saint@gmail.com] for enabling this. Now we can throw things at Jenkins specials without needing to patch trunk, via patches on precommit builds. was (Author: apurtell): By the way thanks [~saint@gmail.com] for enabling this. Now we can throw things a Jenkins specials without needing to patch trunk, via patches on precommit builds. TestLocalHBaseCluster failing on ubuntu2 Key: HBASE-7594 URL: https://issues.apache.org/jira/browse/HBASE-7594 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.96.0 Reporter: Andrew Purtell Assignee: Andrew Purtell Attachments: 7594-1.patch, 7594-2.patch, 7594-3.patch, 7594-4.patch {noformat} java.io.IOException: java.io.IOException: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:612) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:533) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4092) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4042) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:427) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:130) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:202) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Caused by: java.io.IOException: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at 
org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:450) at org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:215) at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3060) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:585) at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:583) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) ... 3 more Caused by: java.io.IOException: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:607) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:615) at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.init(HFileReaderV2.java:115) at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:564) at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:599) at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.init(StoreFile.java:1294) at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:525) at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:628) at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:426) at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:422) ... 8 more Caused by: java.lang.InstantiationException: org.apache.hadoop.io.RawComparator at java.lang.Class.newInstance0(Class.java:340) at java.lang.Class.newInstance(Class.java:308) at org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.createComparator(FixedFileTrailer.java:605) ... 17 more {noformat} -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-7588: --- Status: Patch Available (was: Reopened) {quote}Is any sort of compare really used on WakeupFlushThread? {quote} I did not find anywhere this is used... {quote}Go ahead with second style.{quote} Perfect, it's done. {quote}The imports added,{quote} Removed... {quote}and the blank line w/spaces before toString, seem unnecessary.{quote} Removed too. Also removed some that were already there. Updated patch attached. Fix two findbugs warning in MemStoreFlusher --- Key: HBASE-7588 URL: https://issues.apache.org/jira/browse/HBASE-7588 Project: HBase Issue Type: Bug Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7588-v0-trunk.patch, HBASE-7588-v1-trunk.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-7588: --- Attachment: HBASE-7588-v1-trunk.patch Fix two findbugs warning in MemStoreFlusher --- Key: HBASE-7588 URL: https://issues.apache.org/jira/browse/HBASE-7588 Project: HBase Issue Type: Bug Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7588-v0-trunk.patch, HBASE-7588-v1-trunk.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-7588: --- Status: Open (was: Patch Available) Misses one thing :( Fix two findbugs warning in MemStoreFlusher --- Key: HBASE-7588 URL: https://issues.apache.org/jira/browse/HBASE-7588 Project: HBase Issue Type: Bug Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7588-v0-trunk.patch, HBASE-7588-v1-trunk.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-7588: --- Attachment: HBASE-7588-v2-trunk.patch Fix two findbugs warning in MemStoreFlusher --- Key: HBASE-7588 URL: https://issues.apache.org/jira/browse/HBASE-7588 Project: HBase Issue Type: Bug Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7588-v0-trunk.patch, HBASE-7588-v1-trunk.patch, HBASE-7588-v2-trunk.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-7588: --- Status: Patch Available (was: Open) Fix two findbugs warning in MemStoreFlusher --- Key: HBASE-7588 URL: https://issues.apache.org/jira/browse/HBASE-7588 Project: HBase Issue Type: Bug Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7588-v0-trunk.patch, HBASE-7588-v1-trunk.patch, HBASE-7588-v2-trunk.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7571) add the notion of per-table or per-column family configuration
[ https://issues.apache.org/jira/browse/HBASE-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559983#comment-13559983 ] Hadoop QA commented on HBASE-7571: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12565990/HBASE-7571-v2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 9 new or modified tests. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 3 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 lineLengths{color}. The patch introduces lines longer than 100 {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.TestLocalHBaseCluster Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/4129//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4129//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4129//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4129//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4129//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4129//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4129//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4129//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/4129//console This message is automatically generated. add the notion of per-table or per-column family configuration -- Key: HBASE-7571 URL: https://issues.apache.org/jira/browse/HBASE-7571 Project: HBase Issue Type: Sub-task Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HBASE-7571-v0-based-on-HBASE-7563.patch, HBASE-7571-v0-including-HBASE-7563.patch, HBASE-7571-v1.patch, HBASE-7571-v2.patch Main part of split HBASE-7236. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7503) Add exists(List) in HTableInterface to allow multiple parallel exists at one time
[ https://issues.apache.org/jira/browse/HBASE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-7503: --- Status: Open (was: Patch Available) Add exists(List) in HTableInterface to allow multiple parallel exists at one time - Key: HBASE-7503 URL: https://issues.apache.org/jira/browse/HBASE-7503 Project: HBase Issue Type: Improvement Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7503-v0-trunk.patch, HBASE-7503-v10-trunk.patch, HBASE-7503-v11-trunk.patch, HBASE-7503-v1-trunk.patch, HBASE-7503-v2-trunk.patch, HBASE-7503-v2-trunk.patch, HBASE-7503-v3-trunk.patch, HBASE-7503-v4-trunk.patch, HBASE-7503-v5-trunk.patch, HBASE-7503-v7-trunk.patch, HBASE-7503-v8-trunk.patch, HBASE-7503-v9-trunk.patch Original Estimate: 5m Remaining Estimate: 5m We need to have a Boolean[] exists(List<Get> gets) throws IOException method implemented in HTableInterface. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
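For context, the method proposed in HBASE-7503 is a batch existence check that returns one Boolean per Get, in input order. A minimal in-memory sketch of that contract (the Get stand-in and the backing set are illustrative, not the real HBase client API):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch of the shape of the proposed
// Boolean[] exists(List<Get> gets) method; not HBase code.
public class ExistsBatchSketch {
    // Minimal stand-in for org.apache.hadoop.hbase.client.Get.
    static class Get {
        final String row;
        Get(String row) { this.row = row; }
    }

    private final Set<String> rows = new HashSet<>();

    void put(String row) { rows.add(row); }

    // One existence answer per Get, preserving input order.
    Boolean[] exists(List<Get> gets) {
        Boolean[] result = new Boolean[gets.size()];
        for (int i = 0; i < gets.size(); i++) {
            result[i] = rows.contains(gets.get(i).row);
        }
        return result;
    }

    public static void main(String[] args) {
        ExistsBatchSketch table = new ExistsBatchSketch();
        table.put("row1");
        Boolean[] found = table.exists(Arrays.asList(new Get("row1"), new Get("row2")));
        System.out.println(Arrays.toString(found));  // [true, false]
    }
}
```

The point of returning an array rather than issuing one exists() call per row is that the real implementation can dispatch the checks to the regionservers in parallel.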
[jira] [Commented] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559990#comment-13559990 ] Sergey Shelukhin commented on HBASE-7588: - +1 on latest patch Fix two findbugs warning in MemStoreFlusher --- Key: HBASE-7588 URL: https://issues.apache.org/jira/browse/HBASE-7588 Project: HBase Issue Type: Bug Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7588-v0-trunk.patch, HBASE-7588-v1-trunk.patch, HBASE-7588-v2-trunk.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6466) Enable multi-thread for memstore flush
[ https://issues.apache.org/jira/browse/HBASE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559989#comment-13559989 ] Hadoop QA commented on HBASE-6466: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12565995/HBASE-6466-v5.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 4 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.regionserver.wal.TestHLog Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/4131//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4131//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4131//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4131//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4131//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4131//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4131//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4131//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/4131//console This message is automatically generated. Enable multi-thread for memstore flush -- Key: HBASE-6466 URL: https://issues.apache.org/jira/browse/HBASE-6466 Project: HBase Issue Type: Improvement Components: regionserver Affects Versions: 0.96.0 Reporter: chunhui shen Assignee: chunhui shen Priority: Critical Fix For: 0.96.0 Attachments: HBASE-6466.patch, HBASE-6466v2.patch, HBASE-6466v3.1.patch, HBASE-6466v3.patch, HBASE-6466-v4.patch, HBASE-6466-v4.patch, HBASE-6466-v5.patch If the KV is large or Hlog is closed with high-pressure putting, we found memstore is often above the high water mark and block the putting. So should we enable multi-thread for Memstore Flush? 
Some performance test data for reference: 1. test environment: random writing; upper memstore limit 5.6GB; lower memstore limit 4.8GB; 400 regions per regionserver; row len=50 bytes, value len=1024 bytes; 5 regionservers, 300 ipc handlers per regionserver; 5 clients, 50 handler threads per client for writing 2. test results: one cacheFlush handler: tps 7.8k/s per regionserver, flush 10.1MB/s per regionserver, many aboveGlobalMemstoreLimit blockings appear; two cacheFlush handlers: tps 10.7k/s per regionserver, flush 12.46MB/s per regionserver; with 200 handler threads per client, two cacheFlush handlers: tps 16.1k/s per regionserver, flush 18.6MB/s per regionserver -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
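As a quick sanity check on the reported numbers, doubling the flush handlers yields roughly a 1.4x throughput gain at 50 client threads and about 2x at 200 client threads:

```java
// Arithmetic on the per-regionserver throughput figures reported above:
// one flush handler: 7.8k ops/s; two handlers: 10.7k (50 client threads)
// and 16.1k (200 client threads).
public class FlushSpeedup {
    static double speedup(double baselineTps, double improvedTps) {
        return improvedTps / baselineTps;
    }

    public static void main(String[] args) {
        System.out.printf("two handlers, 50 threads:  %.2fx%n", speedup(7.8, 10.7));
        System.out.printf("two handlers, 200 threads: %.2fx%n", speedup(7.8, 16.1));
    }
}
```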
[jira] [Updated] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-7588: - Resolution: Fixed Status: Resolved (was: Patch Available) Resolved. Thanks for the patch v2 JM (and lads for the reviews) Fix two findbugs warning in MemStoreFlusher --- Key: HBASE-7588 URL: https://issues.apache.org/jira/browse/HBASE-7588 Project: HBase Issue Type: Bug Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7588-v0-trunk.patch, HBASE-7588-v1-trunk.patch, HBASE-7588-v2-trunk.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-2611) Handle RS that fails while processing the failure of another one
[ https://issues.apache.org/jira/browse/HBASE-2611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13560005#comment-13560005 ] Himanshu Vashishtha commented on HBASE-2611: Thanks for the review Lars :), and Ted for updating the patch. Handle RS that fails while processing the failure of another one Key: HBASE-2611 URL: https://issues.apache.org/jira/browse/HBASE-2611 Project: HBase Issue Type: Sub-task Components: Replication Reporter: Jean-Daniel Cryans Assignee: Himanshu Vashishtha Fix For: 0.96.0, 0.94.5 Attachments: 2611-v3.patch, HBase-2611-upstream-v1.patch, HBASE-2611-v2.patch HBASE-2223 doesn't manage region servers that fail while doing the transfer of HLogs queues from other region servers that failed. Devise a reliable way to do it. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7643) HFileArchiver.resolveAndArchive() race condition and snapshot data loss
[ https://issues.apache.org/jira/browse/HBASE-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13560011#comment-13560011 ] Matteo Bertozzi commented on HBASE-7643: Another thing that Jon pointed out is that, instead of deleting the file (which causes data loss), we can simply avoid moving it. That causes no problems, just a slowdown due to the extra file to look at. HFileArchiver.resolveAndArchive() race condition and snapshot data loss --- Key: HBASE-7643 URL: https://issues.apache.org/jira/browse/HBASE-7643 Project: HBase Issue Type: Bug Affects Versions: hbase-6055, 0.96.0 Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Priority: Blocker Fix For: 0.96.0, 0.94.5 Attachments: HBASE-7653-p4-v0.patch * The master has an hfile cleaner thread (that is responsible for cleaning the /hbase/.archive dir) ** /hbase/.archive/table/region/family/hfile ** if the family/region/family directory is empty the cleaner removes it * The master can archive files (from another thread, e.g. DeleteTableHandler) * The region can archive files (from another server/process, e.g. compaction) The simplified file archiving code looks like this: {code} HFileArchiver.resolveAndArchive(...) { // ensure that the archive dir exists fs.mkdir(archiveDir); // move the file to the archive success = fs.rename(originalPath/fileName, archiveDir/fileName) // if the rename failed, delete the file without archiving if (!success) fs.delete(originalPath/fileName); } {code} Since there's no synchronization between HFileArchiver.resolveAndArchive() and the cleaner run (different process, thread, ...) you can end up in a situation where you are moving something into a directory that doesn't exist. {code} fs.mkdir(archiveDir); // HFileCleaner chore starts at this point // and the archive directory that we just ensured to be present gets removed. // The rename at this point will fail since the parent directory is missing. 
success = fs.rename(originalPath/fileName, archiveDir/fileName) {code} The bad thing about deleting the file without archiving is that if you have a snapshot, or a clone table, that relies on the file being present, you're losing data. Possible solutions: * Create a ZooKeeper lock, to notify the master (Hey, I'm archiving something, wait a bit) * Add an RS - Master call to let the master remove files and avoid this kind of situation * Avoid removing empty directories from the archive if the table exists or is not disabled * Add a try/catch around the fs.rename The last one, the easiest one, looks like: {code} for (int i = 0; i < retries; ++i) { // ensure the archive directory is present fs.mkdir(archiveDir); // possible race - // try to archive the file success = fs.rename(originalPath/fileName, archiveDir/fileName); if (success) break; } {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
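The retry approach can be sketched end to end with plain java.nio file operations (names like archiveWithRetries are illustrative, not the actual HBase patch): if the cleaner removes archiveDir between the mkdir and the rename, the loop re-creates the directory and retries instead of deleting the file.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

// Sketch of the retry idea from the JIRA comment above; assumed names,
// local filesystem instead of HDFS, and no HBase dependencies.
public class ArchiveRetrySketch {
    static boolean archiveWithRetries(Path file, Path archiveDir, int retries) throws IOException {
        for (int i = 0; i < retries; i++) {
            Files.createDirectories(archiveDir);  // ensure the archive dir exists (race window here)
            try {
                Files.move(file, archiveDir.resolve(file.getFileName()));
                return true;                      // archived; nothing was deleted
            } catch (NoSuchFileException e) {
                // Parent vanished under us (cleaner chore): loop and re-create it.
            }
        }
        return false;                             // give up, but do NOT silently delete the file
    }

    // Sets up a temp dir and simulates the cleaner having removed archiveDir
    // before we start; the retry loop re-creates it and the move succeeds.
    static boolean demo() throws IOException {
        Path tmp = Files.createTempDirectory("archiver");
        Path hfile = Files.createFile(tmp.resolve("hfile1"));
        Path archiveDir = tmp.resolve("archive");
        return archiveWithRetries(hfile, archiveDir, 3);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());  // true
    }
}
```

Note the key behavioral change versus the original code: on failure the file stays in place, so a snapshot or clone that references it never loses data.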
[jira] [Updated] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs
[ https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Baugher updated HBASE-3996: - Attachment: 3996-v14.txt Fixed findbug error in MultiTableInputFormatBase Support multiple tables and scanners as input to the mapper in map/reduce jobs -- Key: HBASE-3996 URL: https://issues.apache.org/jira/browse/HBASE-3996 Project: HBase Issue Type: Improvement Components: mapreduce Reporter: Eran Kutner Assignee: Bryan Baugher Priority: Critical Fix For: 0.96.0, 0.94.5 Attachments: 3996-v10.txt, 3996-v11.txt, 3996-v12.txt, 3996-v13.txt, 3996-v14.txt, 3996-v2.txt, 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 3996-v6.txt, 3996-v7.txt, 3996-v8.txt, 3996-v9.txt, HBase-3996.patch It seems that in many cases feeding data from multiple tables or multiple scanners on a single table can save a lot of time when running map/reduce jobs. I propose a new MultiTableInputFormat class that would allow doing this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-2611) Handle RS that fails while processing the failure of another one
[ https://issues.apache.org/jira/browse/HBASE-2611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13560036#comment-13560036 ] Ted Yu commented on HBASE-2611: --- The trunk patch depends on HBASE-7382 @Himanshu: Can you run the tests listed @ 28/Jun/12 04:07 ? Handle RS that fails while processing the failure of another one Key: HBASE-2611 URL: https://issues.apache.org/jira/browse/HBASE-2611 Project: HBase Issue Type: Sub-task Components: Replication Reporter: Jean-Daniel Cryans Assignee: Himanshu Vashishtha Fix For: 0.96.0, 0.94.5 Attachments: 2611-v3.patch, HBase-2611-upstream-v1.patch, HBASE-2611-v2.patch HBASE-2223 doesn't manage region servers that fail while doing the transfer of HLogs queues from other region servers that failed. Devise a reliable way to do it. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4802) Disable show table metrics in bulk loader
[ https://issues.apache.org/jira/browse/HBASE-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13560037#comment-13560037 ] Hudson commented on HBASE-4802: --- Integrated in HBase-0.94 #751 (See [https://builds.apache.org/job/HBase-0.94/751/]) HBASE-7644 Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94 (Revision 1437091) Result = SUCCESS tedyu : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaConfigured.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaMetrics.java Disable show table metrics in bulk loader - Key: HBASE-4802 URL: https://issues.apache.org/jira/browse/HBASE-4802 Project: HBase Issue Type: Bug Reporter: Nicolas Spiegelberg Assignee: Liyin Tang Priority: Trivial Fix For: 0.96.0 Attachments: HBASE-4802.patch During bulk load, the Configuration object may be set to null. This caused an NPE in per-CF metrics because it consults the Configuration to determine whether to show the Table name. Need to add simple change to allow the conf to be null not specify table name in that instance. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7646) Make forkedProcessTimeoutInSeconds configurable
[ https://issues.apache.org/jira/browse/HBASE-7646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13560038#comment-13560038 ] Hudson commented on HBASE-7646: --- Integrated in HBase-0.94 #751 (See [https://builds.apache.org/job/HBase-0.94/751/]) HBASE-7646 Make forkedProcessTimeoutInSeconds configurable (Revision 1437132) Result = SUCCESS jxiang : Files : * /hbase/branches/0.94/pom.xml Make forkedProcessTimeoutInSeconds configurable --- Key: HBASE-7646 URL: https://issues.apache.org/jira/browse/HBASE-7646 Project: HBase Issue Type: Bug Components: build Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Trivial Fix For: 0.96.0, 0.94.5 Attachments: 0.94-7646.patch, trunk-7646.patch Command line property surefire.timeout somehow doesn't work. It may be because forkedProcessTimeoutInSeconds is hard-coded to 900. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7644) Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94
[ https://issues.apache.org/jira/browse/HBASE-7644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13560039#comment-13560039 ] Hudson commented on HBASE-7644: --- Integrated in HBase-0.94 #751 (See [https://builds.apache.org/job/HBase-0.94/751/]) HBASE-7644 Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94 (Revision 1437091) Result = SUCCESS tedyu : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaConfigured.java * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/SchemaMetrics.java Port HBASE-4802 'Disable show table metrics in bulk loader' to 0.94 --- Key: HBASE-7644 URL: https://issues.apache.org/jira/browse/HBASE-7644 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Ted Yu Fix For: 0.94.5 Attachments: 7644-disable-show-table.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13560043#comment-13560043 ] Hadoop QA commented on HBASE-7588: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12566005/HBASE-7588-v2-trunk.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 2 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.TestLocalHBaseCluster Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/4132//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4132//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4132//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4132//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4132//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4132//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4132//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4132//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/4132//console This message is automatically generated. Fix two findbugs warning in MemStoreFlusher --- Key: HBASE-7588 URL: https://issues.apache.org/jira/browse/HBASE-7588 Project: HBase Issue Type: Bug Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7588-v0-trunk.patch, HBASE-7588-v1-trunk.patch, HBASE-7588-v2-trunk.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7503) Add exists(List) in HTableInterface to allow multiple parallel exists at one time
[ https://issues.apache.org/jira/browse/HBASE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-7503: --- Attachment: HBASE-7503-v12-trunk.patch Add exists(List) in HTableInterface to allow multiple parallel exists at one time - Key: HBASE-7503 URL: https://issues.apache.org/jira/browse/HBASE-7503 Project: HBase Issue Type: Improvement Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7503-v0-trunk.patch, HBASE-7503-v10-trunk.patch, HBASE-7503-v11-trunk.patch, HBASE-7503-v12-trunk.patch, HBASE-7503-v1-trunk.patch, HBASE-7503-v2-trunk.patch, HBASE-7503-v2-trunk.patch, HBASE-7503-v3-trunk.patch, HBASE-7503-v4-trunk.patch, HBASE-7503-v5-trunk.patch, HBASE-7503-v7-trunk.patch, HBASE-7503-v8-trunk.patch, HBASE-7503-v9-trunk.patch Original Estimate: 5m Remaining Estimate: 5m We need to have a Boolean[] exists(List<Get> gets) throws IOException method implemented in HTableInterface. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7503) Add exists(List) in HTableInterface to allow multiple parallel exists at one time
[ https://issues.apache.org/jira/browse/HBASE-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-7503: --- Status: Patch Available (was: Open) Hi Sergey, Regarding closestRowBefore, you're totally right. It's not used and not 100% required. I put it in because I took what was on GetRequest and kept the same structure. The idea behind that is that even if it's not used in this patch, it might still be useful in the future. So I'm equally open to removing it or keeping it. {quote}assuming a working patch{quote} So far it seems to be working fine. I have added a few test cases and all passed. I'm a bit out of ideas for test cases, so if anything can be added, just let me know and I will be happy to add it. I think the more test cases we have now, the fewer issues will be introduced in the future. The patch was no longer compiling because of the lock being removed, so attached is an updated version which compiles on trunk. I have also added a get with a null parameter to the test suite. Add exists(List) in HTableInterface to allow multiple parallel exists at one time - Key: HBASE-7503 URL: https://issues.apache.org/jira/browse/HBASE-7503 Project: HBase Issue Type: Improvement Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7503-v0-trunk.patch, HBASE-7503-v10-trunk.patch, HBASE-7503-v11-trunk.patch, HBASE-7503-v12-trunk.patch, HBASE-7503-v1-trunk.patch, HBASE-7503-v2-trunk.patch, HBASE-7503-v2-trunk.patch, HBASE-7503-v3-trunk.patch, HBASE-7503-v4-trunk.patch, HBASE-7503-v5-trunk.patch, HBASE-7503-v7-trunk.patch, HBASE-7503-v8-trunk.patch, HBASE-7503-v9-trunk.patch Original Estimate: 5m Remaining Estimate: 5m We need to have a Boolean[] exists(List<Get> gets) throws IOException method implemented in HTableInterface. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
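The batched Boolean[] exists(List<Get>) idea proposed above can be sketched independently of HBase. The class below is a hypothetical, self-contained stand-in: it checks several row keys in parallel against an in-memory set, where a real client would issue server-side existence checks instead. ParallelExists and its constructor are illustrative names, not HBase API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of a batched, parallel exists() call, mirroring the
// proposed Boolean[] exists(List<Get> gets) signature with plain row keys.
public class ParallelExists {
    // Stand-in for the table contents; a real client would query regionservers.
    private final Set<String> storedRows;

    public ParallelExists(Set<String> storedRows) {
        this.storedRows = storedRows;
    }

    // Returns one Boolean per requested key, in request order.
    public Boolean[] exists(List<String> rowKeys) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // Fire off all lookups concurrently ("multiple parallel exists").
            List<Future<Boolean>> futures = new ArrayList<>();
            for (String key : rowKeys) {
                futures.add(pool.submit(() -> storedRows.contains(key)));
            }
            // Collect results in the same order as the requests.
            Boolean[] results = new Boolean[rowKeys.size()];
            for (int i = 0; i < results.length; i++) {
                try {
                    results[i] = futures.get(i).get();
                } catch (InterruptedException | ExecutionException e) {
                    throw new RuntimeException(e);
                }
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

The point of returning a Boolean[] rather than issuing one RPC per Get is the same as in the patch: the checks can run concurrently, so a batch of N lookups costs roughly one round trip instead of N.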
[jira] [Updated] (HBASE-7519) Support level compaction
[ https://issues.apache.org/jira/browse/HBASE-7519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7519: --- Assignee: Sergey Shelukhin (was: Jimmy Xiang) Support level compaction Key: HBASE-7519 URL: https://issues.apache.org/jira/browse/HBASE-7519 Project: HBase Issue Type: New Feature Components: Compaction Reporter: Jimmy Xiang Assignee: Sergey Shelukhin Attachments: level-compaction.pdf, level-compactions-notes.txt, level-compactions-notes.txt The level compaction algorithm may help HBase for some use cases, for example, read-heavy loads (especially when just one version is used) or a relatively small key space updated frequently. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7516) Make compaction policy pluggable
[ https://issues.apache.org/jira/browse/HBASE-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7516: --- Status: Open (was: Patch Available) Make compaction policy pluggable Key: HBASE-7516 URL: https://issues.apache.org/jira/browse/HBASE-7516 Project: HBase Issue Type: Improvement Reporter: Jimmy Xiang Assignee: Jimmy Xiang Attachments: HBASE-7516-v0.patch, HBASE-7516-v1.patch, HBASE-7516-v2.patch, trunk-7516.patch Currently, the compaction selection is pluggable. It will be great to make the compaction algorithm pluggable too so that we can implement and play with other compaction algorithms. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7516) Make compaction policy pluggable
[ https://issues.apache.org/jira/browse/HBASE-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-7516: --- Assignee: Jimmy Xiang (was: Sergey Shelukhin) Make compaction policy pluggable Key: HBASE-7516 URL: https://issues.apache.org/jira/browse/HBASE-7516 Project: HBase Issue Type: Improvement Reporter: Jimmy Xiang Assignee: Jimmy Xiang Attachments: HBASE-7516-v0.patch, HBASE-7516-v1.patch, HBASE-7516-v2.patch, trunk-7516.patch Currently, the compaction selection is pluggable. It will be great to make the compaction algorithm pluggable too so that we can implement and play with other compaction algorithms. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4755) HBase based block placement in DFS
[ https://issues.apache.org/jira/browse/HBASE-4755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13560081#comment-13560081 ] Devaraj Das commented on HBASE-4755: HDFS-2576 is the hdfs jira that should be addressed. I have started digging into this area with the aim of getting this feature into hbase trunk (with the corresponding hdfs changes). HBase based block placement in DFS -- Key: HBASE-4755 URL: https://issues.apache.org/jira/browse/HBASE-4755 Project: HBase Issue Type: New Feature Affects Versions: 0.94.0 Reporter: Karthik Ranganathan Assignee: Christopher Gist The feature as-is is only useful for HBase clusters that care about data locality on regionservers, but it can also enable a lot of nice features down the road. The basic idea is as follows: instead of letting HDFS determine where to replicate data (r=3) by placing blocks on various regions, it is better to let HBase do so by providing hints to HDFS through the DFS client. That way, instead of replicating data at a block level, we can replicate data at a per-region level (each region owned by a primary, a secondary and a tertiary regionserver). 
This is better for two reasons:
- Can make region failover faster on clusters which benefit from data affinity
- On large clusters with a random block placement policy, this helps reduce the probability of data loss

The algorithm is as follows:
- Each region in META will have 3 columns which are the preferred regionservers for that region (primary, secondary and tertiary)
- Preferred assignment can be controlled by a config knob
- Upon cluster start, HMaster will enter a mapping from each region to 3 regionservers (random hash, could use current locality, etc)
- The load balancer would assign out regions preferring region assignments to primary over secondary over tertiary over any other node
- Periodically (say weekly, configurable) the HMaster would run a locality check and make sure the map it has from region to regionservers is optimal.

Down the road, this can be enhanced to control region placement in the following cases:
- Mixed hardware SKUs where some regionservers can hold fewer regions
- Load balancing across tables where we don't want multiple regions of a table to get assigned to the same regionservers
- Multi-tenancy, where we can restrict the assignment of the regions of some table to a subset of regionservers, so an abusive app cannot take down the whole HBase cluster.
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
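The balancer preference described in the HBASE-4755 proposal (primary over secondary over tertiary over any other node) can be sketched as a tiny lookup. The class and method names below are hypothetical illustrations, not HBase's actual balancer API; returning null stands in for falling back to the default assignment.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: each region maps to three favored regionservers
// (primary, secondary, tertiary), as in the META columns described above;
// the balancer picks the highest-priority one that is currently live.
public class PreferredAssignment {
    // region name -> [primary, secondary, tertiary] regionservers
    private final Map<String, List<String>> preferences;

    public PreferredAssignment(Map<String, List<String>> preferences) {
        this.preferences = preferences;
    }

    // Returns the most-preferred live server for the region, or null to
    // signal a fallback to the default balancer behavior.
    public String pickServer(String region, Set<String> liveServers) {
        for (String server : preferences.getOrDefault(region, List.of())) {
            if (liveServers.contains(server)) {
                return server;
            }
        }
        return null;
    }
}
```

With HDFS placing the region's blocks on the same three servers, any of the three choices preserves data locality after a failover.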
[jira] [Commented] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs
[ https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13560093#comment-13560093 ] Hadoop QA commented on HBASE-3996: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12566013/3996-v14.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 2 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/4133//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4133//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4133//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4133//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4133//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4133//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4133//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/4133//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/4133//console This message is automatically generated. Support multiple tables and scanners as input to the mapper in map/reduce jobs -- Key: HBASE-3996 URL: https://issues.apache.org/jira/browse/HBASE-3996 Project: HBase Issue Type: Improvement Components: mapreduce Reporter: Eran Kutner Assignee: Bryan Baugher Priority: Critical Fix For: 0.96.0, 0.94.5 Attachments: 3996-v10.txt, 3996-v11.txt, 3996-v12.txt, 3996-v13.txt, 3996-v14.txt, 3996-v2.txt, 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 3996-v6.txt, 3996-v7.txt, 3996-v8.txt, 3996-v9.txt, HBase-3996.patch It seems that in many cases feeding data from multiple tables or multiple scanners on a single table can save a lot of time when running map/reduce jobs. I propose a new MultiTableInputFormat class that would allow doing this. 
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7646) Make forkedProcessTimeoutInSeconds configurable
[ https://issues.apache.org/jira/browse/HBASE-7646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13560095#comment-13560095 ] Hudson commented on HBASE-7646: --- Integrated in HBase-TRUNK #3777 (See [https://builds.apache.org/job/HBase-TRUNK/3777/]) HBASE-7646 Make forkedProcessTimeoutInSeconds configurable (Revision 1437130) Result = FAILURE jxiang : Files : * /hbase/trunk/pom.xml Make forkedProcessTimeoutInSeconds configurable --- Key: HBASE-7646 URL: https://issues.apache.org/jira/browse/HBASE-7646 Project: HBase Issue Type: Bug Components: build Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Trivial Fix For: 0.96.0, 0.94.5 Attachments: 0.94-7646.patch, trunk-7646.patch Command line property surefire.timeout somehow doesn't work. It may be because forkedProcessTimeoutInSeconds is hard-coded to 900. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
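The committed fix above touches only /hbase/trunk/pom.xml. The snippet below is a sketch of the usual way to make such a timeout overridable; the property name surefire.timeout comes from the issue text, but the exact XML is an assumption, not the actual diff:

```xml
<!-- Sketch (assumption): expose the fork timeout as a property so that
     -Dsurefire.timeout=... on the command line overrides the default,
     instead of forkedProcessTimeoutInSeconds being hard-coded to 900. -->
<properties>
  <surefire.timeout>900</surefire.timeout>
</properties>
<!-- ... -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkedProcessTimeoutInSeconds>${surefire.timeout}</forkedProcessTimeoutInSeconds>
  </configuration>
</plugin>
```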
[jira] [Commented] (HBASE-7588) Fix two findbugs warning in MemStoreFlusher
[ https://issues.apache.org/jira/browse/HBASE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13560096#comment-13560096 ] Hudson commented on HBASE-7588: --- Integrated in HBase-TRUNK #3777 (See [https://builds.apache.org/job/HBase-TRUNK/3777/]) HBASE-7588 Fix two findbugs warning in MemStoreFlusher; REAPPLIED (Revision 1437154) HBASE-7588 Fix two findbugs warning in MemStoreFlusher; REVERTED (Revision 1437121) HBASE-7588 Fix two findbugs warning in MemStoreFlusher (Revision 1437119) Result = FAILURE stack : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java stack : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java stack : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java Fix two findbugs warning in MemStoreFlusher --- Key: HBASE-7588 URL: https://issues.apache.org/jira/browse/HBASE-7588 Project: HBase Issue Type: Bug Reporter: Jean-Marc Spaggiari Assignee: Jean-Marc Spaggiari Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7588-v0-trunk.patch, HBASE-7588-v1-trunk.patch, HBASE-7588-v2-trunk.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7642) HBase shell cannot set Compression
[ https://issues.apache.org/jira/browse/HBASE-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13560097#comment-13560097 ] Hudson commented on HBASE-7642: --- Integrated in HBase-TRUNK #3777 (See [https://builds.apache.org/job/HBase-TRUNK/3777/]) HBASE-7642. HBase shell cannot set Compression (Revision 1437099) Result = FAILURE enis : Files : * /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb HBase shell cannot set Compression -- Key: HBASE-7642 URL: https://issues.apache.org/jira/browse/HBASE-7642 Project: HBase Issue Type: Bug Components: shell Affects Versions: 0.96.0 Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.96.0 Attachments: hbase-7642_v1.patch HBASE-7063 changed the package name for the Compression class, but failed to update admin.rb for the shell. {code} hbase(main):005:0> alter 'cluster_test', {NAME => 'test_cf', DATA_BLOCK_ENCODING => 'FAST_DIFF', COMPRESSION => 'GZ'} ERROR: cannot load Java class org.apache.hadoop.hbase.io.hfile.Compression {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7546) Obtain a table read lock on region split operations
[ https://issues.apache.org/jira/browse/HBASE-7546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-7546: - Attachment: hbase-7546_v1.patch Attaching candidate patch. This depends on HBASE-7305. Obtain a table read lock on region split operations --- Key: HBASE-7546 URL: https://issues.apache.org/jira/browse/HBASE-7546 Project: HBase Issue Type: Bug Reporter: Enis Soztutar Assignee: Enis Soztutar Attachments: hbase-7546_v1.patch As discussed in the parent issue HBASE-7305, we should be coordinating between splits and table operations to ensure that they don't happen at the same time. In this issue we will acquire shared read locks for region splits. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7559) Add additional Snapshots Unit Test Coverage
[ https://issues.apache.org/jira/browse/HBASE-7559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13560137#comment-13560137 ] Matteo Bertozzi commented on HBASE-7559: +1 on v8 Add additional Snapshots Unit Test Coverage --- Key: HBASE-7559 URL: https://issues.apache.org/jira/browse/HBASE-7559 Project: HBase Issue Type: Sub-task Components: test Affects Versions: 0.96.0 Reporter: Aleksandr Shulman Assignee: Aleksandr Shulman Fix For: 0.96.0 Attachments: 7559-v7.txt, 7559-v8.txt, aleks-snapshots.patch Add additional testing for Snapshots. In particular, we should add tests to verify that operations on cloned tables do not affect the original (and vice versa). Also, we should do testing on table describes before and after snapshot/restore operations. Finally, we should add testing for the HBase shell. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-7647) 0.94 hfiles v2.1 are not backwards compatible with HFilev2.0
Elliott Clark created HBASE-7647: Summary: 0.94 hfiles v2.1 are not backwards compatible with HFilev2.0 Key: HBASE-7647 URL: https://issues.apache.org/jira/browse/HBASE-7647 Project: HBase Issue Type: Bug Components: HFile Affects Versions: 0.94.4 Reporter: Elliott Clark Assignee: Elliott Clark When doing a rolling restart from 0.92.x to 0.94.x, any hfiles written by 0.94 are incompatible with any of the 0.92 region servers. This is caused by the checksums being put into 0.94:
* a minor version was added
* checksums were put into the block
* checksum metadata was added to block headers.
I propose that since these changes are only needed when using hbase.regionserver.checksum.verify, they should be turned off if that option is turned off. Doing so will allow rolling upgrades to go smoother. If a user wants to go from a 0.92 cluster to a 0.94 cluster with hbase.regionserver.checksum.verify enabled, they can:
* Roll out 0.94
* Change hbase-site.xml
* Rolling restart the region servers.
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
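The HBASE-7647 proposal above amounts to gating the new HFile minor version on the checksum setting. A toy sketch of that decision follows; the class name, constants, and version values are illustrative assumptions, not the actual HFile writer code:

```java
// Hypothetical sketch of the gating proposed above: only emit the newer,
// checksum-bearing minor version when hbase.regionserver.checksum.verify
// is enabled, so older (0.92) readers can still open freshly written files.
public class HFileVersionChooser {
    static final int MINOR_VERSION_NO_CHECKSUM = 0;   // readable by 0.92
    static final int MINOR_VERSION_WITH_CHECKSUM = 1; // 0.94+ only

    // checksumVerifyEnabled mirrors hbase.regionserver.checksum.verify.
    public static int chooseMinorVersion(boolean checksumVerifyEnabled) {
        return checksumVerifyEnabled ? MINOR_VERSION_WITH_CHECKSUM
                                     : MINOR_VERSION_NO_CHECKSUM;
    }
}
```

This is why the rolling-upgrade recipe in the issue works: with the option off, 0.94 keeps writing the old format until every server is on 0.94, and only then is the new format switched on.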
[jira] [Updated] (HBASE-2611) Handle RS that fails while processing the failure of another one
[ https://issues.apache.org/jira/browse/HBASE-2611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-2611: - Priority: Critical (was: Major) Let's make critical so it gets in. Handle RS that fails while processing the failure of another one Key: HBASE-2611 URL: https://issues.apache.org/jira/browse/HBASE-2611 Project: HBase Issue Type: Sub-task Components: Replication Reporter: Jean-Daniel Cryans Assignee: Himanshu Vashishtha Priority: Critical Fix For: 0.96.0, 0.94.5 Attachments: 2611-v3.patch, HBase-2611-upstream-v1.patch, HBASE-2611-v2.patch HBASE-2223 doesn't manage region servers that fail while doing the transfer of HLogs queues from other region servers that failed. Devise a reliable way to do it. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7382) Port ZK.multi support from HBASE-6775 to 0.96
[ https://issues.apache.org/jira/browse/HBASE-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-7382: - Priority: Critical (was: Major) Making this critical, because it is needed for the correct functioning of replication. Port ZK.multi support from HBASE-6775 to 0.96 - Key: HBASE-7382 URL: https://issues.apache.org/jira/browse/HBASE-7382 Project: HBase Issue Type: Bug Components: Zookeeper Reporter: Gregory Chanan Priority: Critical Fix For: 0.96.0 HBASE-6775 adds support for ZK.multi to ZKUtil and uses it for the 0.92/0.94 compatibility fix implemented in HBASE-6710. ZK.multi support is most likely useful in 0.96, but since HBASE-6710 is not relevant for 0.96, perhaps we should find another use case first before we port. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7382) Port ZK.multi support from HBASE-6775 to 0.96
[ https://issues.apache.org/jira/browse/HBASE-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13560152#comment-13560152 ] Lars Hofhansl commented on HBASE-7382: -- I thought we had said that 0.96 will only support Hadoop 1+ and ZK 3.4+. No big deal if we don't, just that we can only get away with forcing such an upgrade once and 0.96 seems like the logical place to do that. Eventually I would guess that we use multi in multiple (no pun) places, and at that point we'd need to force an upgrade or code all logic twice. Port ZK.multi support from HBASE-6775 to 0.96 - Key: HBASE-7382 URL: https://issues.apache.org/jira/browse/HBASE-7382 Project: HBase Issue Type: Bug Components: Zookeeper Reporter: Gregory Chanan Priority: Critical Fix For: 0.96.0 HBASE-6775 adds support for ZK.multi ZKUtil and uses it for the 0.92/0.94 compatibility fix implemented in HBASE-6710. ZK.multi support is most likely useful in 0.96, but since HBASE-6710 is not relevant for 0.96, perhaps we should find another use case first before we port. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira