[jira] [Commented] (HBASE-8640) ServerName in master may not initialize with the configured ipc address of hbase.master.ipc.address
[ https://issues.apache.org/jira/browse/HBASE-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671186#comment-13671186 ] Hadoop QA commented on HBASE-8640:
{color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12585552/HBASE-8640.patch against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:green}+1 core tests{color}. The patch passed unit tests in .
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/5897//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5897//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5897//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5897//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5897//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5897//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5897//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5897//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5897//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/5897//console
This message is automatically generated.
ServerName in master may not initialize with the configured ipc address of hbase.master.ipc.address --- Key: HBASE-8640 URL: https://issues.apache.org/jira/browse/HBASE-8640 Project: HBase Issue Type: Bug Components: master Reporter: rajeshbabu Assignee: rajeshbabu Fix For: 0.98.0, 0.95.2, 0.94.9 Attachments: HBASE-8640.patch We are starting the rpc server with the default interface hostname or the configured ipc address:
{code}
this.rpcServer = HBaseRPC.getServer(this,
    new Class<?>[]{HMasterInterface.class, HMasterRegionInterface.class},
    initialIsa.getHostName(), // This is bindAddress if set else it's hostname
    initialIsa.getPort(),
    numHandlers,
    0, // we don't use high priority handlers in master
    conf.getBoolean("hbase.rpc.verbose", false),
    conf, 0); // this is a DNC w/o high priority handlers
{code}
But we are always initializing the servername with the default hostname, and the master znode also has this hostname:
{code}
String hostname = Strings.domainNamePointerToHostName(DNS.getDefaultHost(
    conf.get("hbase.master.dns.interface", "default"),
    conf.get("hbase.master.dns.nameserver", "default")));
...
this.serverName = new ServerName(hostname, this.isa.getPort(), System.currentTimeMillis());
{code}
If the default interface hostname and the configured ipc address are not the same, clients will get MasterNotRunningException. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
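The attached patch is not reproduced here, but the shape of the fix can be sketched in plain Java: derive the master hostname once, preferring a configured bind address over the DNS-derived default, and reuse that single value for both the rpc server and the ServerName so the two can never diverge. This is a minimal sketch, not the actual patch; resolveMasterHostname and the Map standing in for Configuration are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

public class MasterHostnameSketch {
    // Stand-in for an HBase Configuration lookup; the single-decision-point
    // structure, not the names, is the point of the sketch.
    static String resolveMasterHostname(Map<String, String> conf, String dnsDefaultHost) {
        String bindAddress = conf.get("hbase.master.ipc.address");
        // If an explicit ipc address is configured, use it for the ServerName
        // too; otherwise fall back to the DNS default hostname.
        return (bindAddress != null && !bindAddress.isEmpty()) ? bindAddress : dnsDefaultHost;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // No ipc address configured: the DNS default hostname is used everywhere.
        System.out.println(resolveMasterHostname(conf, "host-a.example.com"));
        // Configured ipc address wins, for the rpc bind and the ServerName alike.
        conf.put("hbase.master.ipc.address", "10.0.0.5");
        System.out.println(resolveMasterHostname(conf, "host-a.example.com"));
    }
}
```

Because both the rpc server and the ServerName would consume the same resolved value, the mismatch that produces MasterNotRunningException cannot arise.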
[jira] [Updated] (HBASE-8534) fix coverage org.apache.hadoop.hbase.mapreduce
[ https://issues.apache.org/jira/browse/HBASE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Gorshkov updated HBASE-8534: Attachment: HBASE-8534-trunk-g.patch HBASE-8534-0.94-g.patch fix coverage org.apache.hadoop.hbase.mapreduce -- Key: HBASE-8534 URL: https://issues.apache.org/jira/browse/HBASE-8534 Project: HBase Issue Type: Test Affects Versions: 0.94.8, 0.95.2 Reporter: Aleksey Gorshkov Assignee: Aleksey Gorshkov Attachments: HBASE-8534-0.94-d.patch, HBASE-8534-0.94-e.patch, HBASE-8534-0.94-f.patch, HBASE-8534-0.94-g.patch, HBASE-8534-0.94.patch, HBASE-8534-trunk-a.patch, HBASE-8534-trunk-b.patch, HBASE-8534-trunk-c.patch, HBASE-8534-trunk-d.patch, HBASE-8534-trunk-e.patch, HBASE-8534-trunk-f.patch, HBASE-8534-trunk-g.patch, HBASE-8534-trunk.patch fix coverage org.apache.hadoop.hbase.mapreduce patch HBASE-8534-0.94.patch for branch-0.94 patch HBASE-8534-trunk.patch for branch-0.95 and trunk
[jira] [Commented] (HBASE-8534) fix coverage org.apache.hadoop.hbase.mapreduce
[ https://issues.apache.org/jira/browse/HBASE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671245#comment-13671245 ] Aleksey Gorshkov commented on HBASE-8534: - Ok. LauncherSecurityManager was restored and changed. New patches: patch HBASE-8534-0.94-g.patch for branch-0.94, patch HBASE-8534-trunk-g.patch for branch-0.95 and trunk. Vadim will implement the new LauncherSecurityManager in HBASE-8611 fix coverage org.apache.hadoop.hbase.mapreduce -- Key: HBASE-8534 URL: https://issues.apache.org/jira/browse/HBASE-8534 Project: HBase Issue Type: Test Affects Versions: 0.94.8, 0.95.2 Reporter: Aleksey Gorshkov Assignee: Aleksey Gorshkov Attachments: HBASE-8534-0.94-d.patch, HBASE-8534-0.94-e.patch, HBASE-8534-0.94-f.patch, HBASE-8534-0.94-g.patch, HBASE-8534-0.94.patch, HBASE-8534-trunk-a.patch, HBASE-8534-trunk-b.patch, HBASE-8534-trunk-c.patch, HBASE-8534-trunk-d.patch, HBASE-8534-trunk-e.patch, HBASE-8534-trunk-f.patch, HBASE-8534-trunk-g.patch, HBASE-8534-trunk.patch fix coverage org.apache.hadoop.hbase.mapreduce patch HBASE-8534-0.94.patch for branch-0.94 patch HBASE-8534-trunk.patch for branch-0.95 and trunk
[jira] [Commented] (HBASE-8346) Prefetching .META. rows in case only when useCache is set to true
[ https://issues.apache.org/jira/browse/HBASE-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671264#comment-13671264 ] Hudson commented on HBASE-8346: --- Integrated in HBase-0.94 #1001 (See [https://builds.apache.org/job/HBase-0.94/1001/]) HBASE-8655 Backport to 94 - HBASE-8346(Prefetching .META. rows in case only when useCache is set to true) (Himanshu and Anoop) (Revision 1488034) Result = FAILURE larsh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java Prefetching .META. rows in case only when useCache is set to true - Key: HBASE-8346 URL: https://issues.apache.org/jira/browse/HBASE-8346 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.95.0 Reporter: Himanshu Vashishtha Assignee: Himanshu Vashishtha Priority: Minor Fix For: 0.98.0, 0.95.1 Attachments: HBase-8346-v1.patch, HBase-8346-v2.patch, HBase-8346-v3.patch While doing a .META. lookup (HCM#locateRegionInMeta), we also prefetch some other regions' info for that table. The usual call to the meta lookup has the useCache variable set to true. Currently, it calls preFetch irrespective of the value of the useCache flag:
{code}
if (Bytes.equals(parentTable, HConstants.META_TABLE_NAME)
    && (getRegionCachePrefetch(tableName))) {
  prefetchRegionCache(tableName, row);
}
{code}
Later on, if the useCache flag is set to false, it deletes the entry for that row from the cache with a forceDeleteCachedLocation() call. This always results in two calls to the .META. table in this case. The useCache variable is set to false in case we are retrying to find a region (regionserver failover). It can be verified from the log statements of a client during a regionserver failover. In the below example, the client was connected to a1217, when a1217 got killed. The region in question is moved to a1215.
The client gets this info from a META scan and caches it, but then deletes it from the cache because it wants the latest info. The result is that even when META provides the latest info, it is still deleted: the client removes a1215.abc.com even though that is the correct location.
{code}
13/04/15 09:49:12 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for t,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c. is a1217.abc.com:40020
13/04/15 09:49:12 WARN client.ServerCallable: Received exception, tries=1, numRetries=30 message=Connection refused
13/04/15 09:49:12 DEBUG client.HConnectionManager$HConnectionImplementation: Removed all cached region locations that map to a1217.abc.com,40020,1365621947381
13/04/15 09:49:13 DEBUG client.MetaScanner: Current INFO from scan results = {NAME => 't,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c.', STARTKEY => 'user7225973201630273569', ENDKEY => '', ENCODED => 40382355b8c45e1338d620c018f8ff6c,}
13/04/15 09:49:13 DEBUG client.MetaScanner: Scanning .META. starting at row=t,user7225973201630273569,00 for max=10 rows using hconnection-0x7786df0f
13/04/15 09:49:13 DEBUG client.MetaScanner: Current INFO from scan results = {NAME => 't,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c.', STARTKEY => 'user7225973201630273569', ENDKEY => '', ENCODED => 40382355b8c45e1338d620c018f8ff6c,}
13/04/15 09:49:13 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for t,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c. is a1215.abc.com:40020
13/04/15 09:49:13 DEBUG client.HConnectionManager$HConnectionImplementation: Removed a1215.abc.com:40020 as a location of t,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c. for tableName=t from cache
13/04/15 09:49:13 DEBUG client.MetaScanner: Current INFO from scan results = {NAME => 't,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c.', STARTKEY => 'user7225973201630273569', ENDKEY => '', ENCODED => 40382355b8c45e1338d620c018f8ff6c,}
13/04/15 09:49:13 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for t,user7225973201630273569,1365536809331.40382355b8c45e1338d620c018f8ff6c. is a1215.abc.com:40020
13/04/15 09:49:13 WARN client.ServerCallable: Received exception, tries=2, numRetries=30 message=org.apache.hadoop.hbase.exceptions.UnknownScannerException: Name: -6313340536390503703, already closed?
13/04/15 09:49:13 DEBUG client.ClientScanner: Advancing internal scanner to startKey at 'user760712450403198900'
{code}
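The intent of the attached patches can be reduced to a guard like the one below: only prefetch META rows when the lookup is allowed to use the cache, so the retry path (useCache == false) no longer prefetches entries that it immediately deletes again. A minimal sketch; the method and parameter names are illustrative, not HBase's actual signatures.

```java
public class PrefetchGuardSketch {
    // Decide whether a META lookup should also prefetch neighboring rows.
    // Before the fix, useCache was ignored here, costing an extra .META.
    // round trip on every retry; gating on useCache avoids that.
    static boolean shouldPrefetch(boolean isMetaParent, boolean prefetchEnabled, boolean useCache) {
        return isMetaParent && prefetchEnabled && useCache;
    }

    public static void main(String[] args) {
        // Normal lookup: prefetching is worthwhile.
        System.out.println(shouldPrefetch(true, true, true));
        // Regionserver-failover retry (useCache == false): skip the prefetch.
        System.out.println(shouldPrefetch(true, true, false));
    }
}
```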
[jira] [Commented] (HBASE-8655) Backport to 94 - HBASE-8346(Prefetching .META. rows in case only when useCache is set to true)
[ https://issues.apache.org/jira/browse/HBASE-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671265#comment-13671265 ] Hudson commented on HBASE-8655: --- Integrated in HBase-0.94 #1001 (See [https://builds.apache.org/job/HBase-0.94/1001/]) HBASE-8655 Backport to 94 - HBASE-8346(Prefetching .META. rows in case only when useCache is set to true) (Himanshu and Anoop) (Revision 1488034) Result = FAILURE larsh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java Backport to 94 - HBASE-8346(Prefetching .META. rows in case only when useCache is set to true) --- Key: HBASE-8655 URL: https://issues.apache.org/jira/browse/HBASE-8655 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.94.0 Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.94.9 Attachments: HBASE-8655.patch
[jira] [Commented] (HBASE-8638) add logging to compaction policy
[ https://issues.apache.org/jira/browse/HBASE-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671291#comment-13671291 ] Hudson commented on HBASE-8638: --- Integrated in hbase-0.95-on-hadoop2 #118 (See [https://builds.apache.org/job/hbase-0.95-on-hadoop2/118/]) HBASE-8638 add logging to compaction policy (Revision 1487949) Result = FAILURE sershe : Files : * /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java * /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/ExploringCompactionPolicy.java * /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/RatioBasedCompactionPolicy.java add logging to compaction policy Key: HBASE-8638 URL: https://issues.apache.org/jira/browse/HBASE-8638 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Trivial Attachments: HBASE-8638-v0.patch We are seeing some strange patterns with the current compaction policy in some contexts (with normal writes, no bulk load). It seems like some logging is needed to understand what is going on, similar to the old default policy.
[jira] [Commented] (HBASE-7244) Provide a command or argument to startup, that formats znodes if provided
[ https://issues.apache.org/jira/browse/HBASE-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671295#comment-13671295 ] Hudson commented on HBASE-7244: --- Integrated in hbase-0.95-on-hadoop2 #118 (See [https://builds.apache.org/job/hbase-0.95-on-hadoop2/118/]) HBASE-7244 Provide a command or argument to startup, that formats znodes if provided; FORGOT TO SVN ADD bin/hbase-cleanup.sh (Revision 1487937) Result = FAILURE stack : Files : * /hbase/branches/0.95/bin/hbase-cleanup.sh Provide a command or argument to startup, that formats znodes if provided - Key: HBASE-7244 URL: https://issues.apache.org/jira/browse/HBASE-7244 Project: HBase Issue Type: New Feature Components: Zookeeper Affects Versions: 0.94.0 Reporter: Harsh J Assignee: rajeshbabu Priority: Critical Fix For: 0.98.0, 0.95.1 Attachments: HBASE-7244_2.patch, HBASE-7244_3.patch, HBASE-7244_4.patch, HBASE-7244_5.patch, HBASE-7244_6.patch, HBASE-7244_7.patch, HBASE-7244.patch Many times I've had to stop the cluster, clear out ZK, and restart, and have seen such instructions given to others. While this is only a quick (and painful to the master) fix, it is certainly nifty for some smaller-cluster users, but the process is far too long, roughly:
1. Stop HBase
2. Start zkCli.sh and connect to the right quorum
3. Find and ensure the HBase parent znode from the configs (/hbase only by default)
4. Run an rmr /hbase in the zkCli.sh shell, or manually delete each znode if on a lower version of ZK
5. Quit zkCli.sh and start HBase again
Perhaps it may be useful if start-hbase.sh itself accepted a formatZK parameter. Such that, when you do a {{start-hbase.sh -formatZK}}, it does steps 2-4 automatically for you. For safety, we could make the formatter code ensure that no HBase instance is actually active, and skip the format process if it is. Similar to an HDFS NameNode's format, which would disallow it if the name directories are locked.
Would this be a useful addition for administrators? Bigtop too can provide a service subcommand that could do this.
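Steps 2-4 above amount to a recursive delete of the HBase parent znode. The recursion the formatter would need can be sketched in plain Java, simulated over an in-memory path-to-children map rather than a live ZooKeeper connection (the rmr name and the map layout are assumptions for illustration):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RmrSketch {
    // Delete a node's children first (older ZK clients have no recursive
    // delete primitive), then the node itself -- the same order zkCli's
    // rmr, or a manual cleanup, must follow.
    static void rmr(Map<String, List<String>> tree, String path) {
        for (String child : tree.getOrDefault(path, Collections.emptyList())) {
            rmr(tree, child);
        }
        tree.remove(path);
    }

    public static void main(String[] args) {
        Map<String, List<String>> tree = new HashMap<>();
        tree.put("/hbase", Arrays.asList("/hbase/master", "/hbase/rs"));
        tree.put("/hbase/rs", Arrays.asList("/hbase/rs/server1"));
        rmr(tree, "/hbase");
        System.out.println(tree.isEmpty()); // everything under /hbase is gone
    }
}
```

A real -formatZK implementation would first verify that no master or regionserver znodes are live, per the safety note above, before starting the delete.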
[jira] [Commented] (HBASE-8658) hbase clean is deaf to the --config DIR option
[ https://issues.apache.org/jira/browse/HBASE-8658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671292#comment-13671292 ] Hudson commented on HBASE-8658: --- Integrated in hbase-0.95-on-hadoop2 #118 (See [https://builds.apache.org/job/hbase-0.95-on-hadoop2/118/]) HBASE-8658 hbase clean is deaf to the --config DIR option (Revision 1488046) Result = FAILURE stack : Files : * /hbase/branches/0.95/bin/hbase * /hbase/branches/0.95/bin/hbase-cleanup.sh hbase clean is deaf to the --config DIR option -- Key: HBASE-8658 URL: https://issues.apache.org/jira/browse/HBASE-8658 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Fix For: 0.98.0, 0.95.1 Attachments: 8658.txt We need this doing migrations. I'd think lots of folks will have their configs in other than the default location (I need it for testing migration)
[jira] [Commented] (HBASE-8659) LruBlockCache logs too much
[ https://issues.apache.org/jira/browse/HBASE-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671296#comment-13671296 ] Hudson commented on HBASE-8659: --- Integrated in hbase-0.95-on-hadoop2 #118 (See [https://builds.apache.org/job/hbase-0.95-on-hadoop2/118/]) HBASE-8659 LruBlockCache logs too much (Revision 1488054) Result = FAILURE sershe : Files : * /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java LruBlockCache logs too much --- Key: HBASE-8659 URL: https://issues.apache.org/jira/browse/HBASE-8659 Project: HBase Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HBASE-8659-v0.patch LruBlockCache logs too much.
{code}
grep -c . hbase-hbase-regionserver-.log
77539
grep -c LruBlockCache hbase-hbase-regionserver-..log
64459
{code}
[jira] [Commented] (HBASE-8631) Meta Region First Recovery
[ https://issues.apache.org/jira/browse/HBASE-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671293#comment-13671293 ] Hudson commented on HBASE-8631: --- Integrated in hbase-0.95-on-hadoop2 #118 (See [https://builds.apache.org/job/hbase-0.95-on-hadoop2/118/]) HBASE-8631 Meta Region First Recovery (Revision 1487940) Result = FAILURE stack : Files : * /hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ServerCallable.java * /hbase/branches/0.95/hbase-it/src/test/java/org/apache/hadoop/hbase/IngestIntegrationTestBase.java * /hbase/branches/0.95/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestDataIngestWithChaosMonkey.java * /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java * /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java * /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitLogWorker.java * /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java * /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogUtil.java * /hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java Meta Region First Recovery -- Key: HBASE-8631 URL: https://issues.apache.org/jira/browse/HBASE-8631 Project: HBase Issue Type: Bug Components: MTTR Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Fix For: 0.98.0, 0.95.1 Attachments: hbase-8631.patch, hbase-8631-v2.patch, hbase-8631-v3.patch, hbase-8631-v4.patch We have a separate wal for the meta region, but the log splitting logic hasn't taken advantage of this: splitlogworker still picks a wal file randomly.
Imagine multiple region servers, including the meta RS, failing at about the same time while the meta wal is recovered last: all failed regions have to wait for meta to be recovered before they can come online again. This JIRA is to let splitlogworker pick the meta wal file first, and then the others.
[jira] [Commented] (HBASE-8653) master seems to be deleting region tmp directory from under compaction
[ https://issues.apache.org/jira/browse/HBASE-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671290#comment-13671290 ] Hudson commented on HBASE-8653: --- Integrated in hbase-0.95-on-hadoop2 #118 (See [https://builds.apache.org/job/hbase-0.95-on-hadoop2/118/]) HBASE-8653 master seems to be deleting region tmp directory from under compaction (Revision 1487962) Result = FAILURE sershe : Files : * /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java master seems to be deleting region tmp directory from under compaction -- Key: HBASE-8653 URL: https://issues.apache.org/jira/browse/HBASE-8653 Project: HBase Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Blocker Fix For: 0.95.1 Attachments: 8653-v2.txt, HBASE-8653-v0.patch Putting it in .1, feel free to move to .2. We have observed some compaction errors where the code was creating a new HDFS block, and the file would not exist. Upon investigation, we found the .tmp directory delete request on the namenode from the master IP shortly before that. There are no specific logs on master, but one thing running at that time was CatalogJanitor. CatalogJanitor calls HRegionFileSystem::openRegionFromFileSystem with readOnly == true (in fact, everyone does); if readOnly is true, HRegionFileSystem nukes the .tmp directory. We didn't go through the details of how it arrived there (or whether there may have been another culprit), but it appears that deleting stuff if (readOnly) is not the intended behavior; it should be if (!readOnly). Given that readOnly is not really used (or rather is always true except for some inconsequential usage in tests), perhaps the entire cleanup should be removed.
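The inverted guard described above can be illustrated in isolation. In this minimal sketch the filesystem is simulated by a Set of paths, and openRegion is an illustrative stand-in for HRegionFileSystem::openRegionFromFileSystem: only a writable open may clean up the region's .tmp leftovers, so a read-only caller like CatalogJanitor never deletes a directory a compaction may be writing into.

```java
import java.util.HashSet;
import java.util.Set;

public class TmpCleanupSketch {
    // Intended behavior: cleanup runs only when the region is opened for
    // writing. The bug was effectively the opposite: if (readOnly) delete.
    static void openRegion(Set<String> fs, boolean readOnly) {
        if (!readOnly) {
            fs.remove("region/.tmp");
        }
    }

    public static void main(String[] args) {
        Set<String> fs = new HashSet<>();
        fs.add("region/.tmp");
        openRegion(fs, true);   // read-only open (e.g. CatalogJanitor): .tmp survives
        System.out.println(fs.contains("region/.tmp"));
        openRegion(fs, false);  // writable open is allowed to clean up leftovers
        System.out.println(fs.contains("region/.tmp"));
    }
}
```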
[jira] [Commented] (HBASE-8657) Miscellaneous log fixups for hbase-it; tidier logging, fix a few NPEs
[ https://issues.apache.org/jira/browse/HBASE-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671294#comment-13671294 ] Hudson commented on HBASE-8657: --- Integrated in hbase-0.95-on-hadoop2 #118 (See [https://builds.apache.org/job/hbase-0.95-on-hadoop2/118/]) HBASE-8657 Miscellaneous log fixups for hbase-it; tidier logging, fix a few NPEs (Revision 1487944) Result = FAILURE stack : Files : * /hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java * /hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java * /hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java * /hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedWithDetailsException.java * /hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ServerCallable.java * /hbase/branches/0.95/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/branches/0.95/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestDataIngestSlowDeterministic.java * /hbase/branches/0.95/hbase-it/src/test/java/org/apache/hadoop/hbase/util/ChaosMonkey.java * /hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedWriter.java Miscellaneous log fixups for hbase-it; tidier logging, fix a few NPEs - Key: HBASE-8657 URL: https://issues.apache.org/jira/browse/HBASE-8657 Project: HBase Issue Type: Sub-task Reporter: stack Assignee: stack Fix For: 0.98.0, 0.95.1 Attachments: fixup2.txt, fixups.txt This is a miscellaneous set of fixups that come of my staring at hbase-it logs trying to follow what is going on.
[jira] [Commented] (HBASE-8655) Backport to 94 - HBASE-8346(Prefetching .META. rows in case only when useCache is set to true)
[ https://issues.apache.org/jira/browse/HBASE-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671299#comment-13671299 ] Hudson commented on HBASE-8655: --- Integrated in HBase-0.94-security #156 (See [https://builds.apache.org/job/HBase-0.94-security/156/]) HBASE-8655 Backport to 94 - HBASE-8346(Prefetching .META. rows in case only when useCache is set to true) (Himanshu and Anoop) (Revision 1488034) Result = FAILURE larsh : Files : * /hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java Backport to 94 - HBASE-8346(Prefetching .META. rows in case only when useCache is set to true) --- Key: HBASE-8655 URL: https://issues.apache.org/jira/browse/HBASE-8655 Project: HBase Issue Type: Bug Components: Client Affects Versions: 0.94.0 Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.94.9 Attachments: HBASE-8655.patch
[jira] [Commented] (HBASE-8638) add logging to compaction policy
[ https://issues.apache.org/jira/browse/HBASE-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671336#comment-13671336 ] Hudson commented on HBASE-8638: --- Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #549 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/549/]) HBASE-8638 add logging to compaction policy (Revision 1487948) Result = FAILURE sershe : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/ExploringCompactionPolicy.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/RatioBasedCompactionPolicy.java add logging to compaction policy Key: HBASE-8638 URL: https://issues.apache.org/jira/browse/HBASE-8638 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Trivial Attachments: HBASE-8638-v0.patch We are seeing some strange patterns with the current compaction policy in some contexts (with normal writes, no bulk load). It seems like some logging is needed to understand what is going on, similar to the old default policy.
[jira] [Commented] (HBASE-8659) LruBlockCache logs too much
[ https://issues.apache.org/jira/browse/HBASE-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671340#comment-13671340 ] Hudson commented on HBASE-8659: --- Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #549 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/549/]) HBASE-8659 LruBlockCache logs too much (Revision 1488052) Result = FAILURE sershe : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java LruBlockCache logs too much --- Key: HBASE-8659 URL: https://issues.apache.org/jira/browse/HBASE-8659 Project: HBase Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HBASE-8659-v0.patch LruBlockCache logs too much.
{code}
grep -c . hbase-hbase-regionserver-.log
77539
grep -c LruBlockCache hbase-hbase-regionserver-..log
64459
{code}
[jira] [Commented] (HBASE-8658) hbase clean is deaf to the --config DIR option
[ https://issues.apache.org/jira/browse/HBASE-8658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671337#comment-13671337 ] Hudson commented on HBASE-8658: --- Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #549 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/549/]) HBASE-8658 hbase clean is deaf to the --config DIR option (Revision 1488045) Result = FAILURE stack : Files : * /hbase/trunk/bin/hbase * /hbase/trunk/bin/hbase-cleanup.sh hbase clean is deaf to the --config DIR option -- Key: HBASE-8658 URL: https://issues.apache.org/jira/browse/HBASE-8658 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Fix For: 0.98.0, 0.95.1 Attachments: 8658.txt We need this doing migrations. I'd think lots of folks will have their configs in other than the default location (I need it for testing migration)
[jira] [Commented] (HBASE-8657) Miscellaneous log fixups for hbase-it; tidier logging, fix a few NPEs
[ https://issues.apache.org/jira/browse/HBASE-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13671339#comment-13671339 ] Hudson commented on HBASE-8657: --- Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #549 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/549/]) HBASE-8657 Miscellaneous log fixups for hbase-it; tidier logging, fix a few NPEs (Revision 1487945) Result = FAILURE stack : Files : * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedWithDetailsException.java * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ServerCallable.java * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestDataIngestSlowDeterministic.java * /hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/util/ChaosMonkey.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedWriter.java Miscellaneous log fixups for hbase-it; tidier logging, fix a few NPEs - Key: HBASE-8657 URL: https://issues.apache.org/jira/browse/HBASE-8657 Project: HBase Issue Type: Sub-task Reporter: stack Assignee: stack Fix For: 0.98.0, 0.95.1 Attachments: fixup2.txt, fixups.txt This is a miscellaneous set of fixups that come of my staring at hbase-it logs trying to follow what is going on.
[jira] [Commented] (HBASE-8653) master seems to be deleting region tmp directory from under compaction
[ https://issues.apache.org/jira/browse/HBASE-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671335#comment-13671335 ] Hudson commented on HBASE-8653: --- Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #549 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/549/]) HBASE-8653 master seems to be deleting region tmp directory from under compaction (Revision 1487961) Result = FAILURE sershe : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java master seems to be deleting region tmp directory from under compaction -- Key: HBASE-8653 URL: https://issues.apache.org/jira/browse/HBASE-8653 Project: HBase Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Blocker Fix For: 0.95.1 Attachments: 8653-v2.txt, HBASE-8653-v0.patch Putting it in .1, feel free to move to .2. We have observed some compaction errors where the code was creating a new HDFS block and the file would not exist. Upon investigation, we found the .tmp directory delete request on the namenode from the master IP shortly before that. There are no specific logs on the master, but one thing running at that time was CatalogJanitor. CatalogJanitor calls HRegionFileSystem::openRegionFromFileSystem with readOnly == true (in fact, everyone does); if readOnly is true, HRegionFileSystem nukes the .tmp directory. We didn't go through the details of how it arrived there (or whether there may have been another culprit), but it appears that deleting stuff if (readOnly) is not the intended behavior and it should be if (!readOnly). Given that readOnly is not really used (or rather is always true except for some inconsequential usage in tests), perhaps the entire cleanup should be removed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
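The fix described in HBASE-8653 amounts to inverting the guard on the temp-directory cleanup. A minimal sketch of the corrected pattern; the class and method names below are simplified stand-ins for HRegionFileSystem, not the actual HBase code:

```java
// Simplified sketch of the open-region flow from this issue: cleanup of
// the region's .tmp directory must only run when the caller may mutate
// the region (readOnly == false). Running it for a read-only opener such
// as CatalogJanitor deletes files out from under an ongoing compaction.
import java.util.HashSet;
import java.util.Set;

public class RegionFileSystemSketch {
  private final Set<String> tmpFiles = new HashSet<>();

  public RegionFileSystemSketch(String... existingTmpFiles) {
    for (String f : existingTmpFiles) {
      tmpFiles.add(f);
    }
  }

  public void openRegionFromFileSystem(boolean readOnly) {
    // The bug was the inverted guard: if (readOnly) cleanupTempDir();
    if (!readOnly) {
      cleanupTempDir(); // only writers may clear leftover temp state
    }
  }

  private void cleanupTempDir() {
    tmpFiles.clear();
  }

  public int tmpFileCount() {
    return tmpFiles.size();
  }
}
```

With this guard, a read-only open (as CatalogJanitor performs) leaves in-flight compaction outputs in .tmp untouched.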
[jira] [Commented] (HBASE-8631) Meta Region First Recovery
[ https://issues.apache.org/jira/browse/HBASE-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671338#comment-13671338 ] Hudson commented on HBASE-8631: --- Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #549 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/549/]) HBASE-8631 Meta Region First Recovery (Revision 1487939) Result = FAILURE stack : Files : * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ServerCallable.java * /hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/IngestIntegrationTestBase.java * /hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestDataIngestWithChaosMonkey.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitLogWorker.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogUtil.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java Meta Region First Recovery -- Key: HBASE-8631 URL: https://issues.apache.org/jira/browse/HBASE-8631 Project: HBase Issue Type: Bug Components: MTTR Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Fix For: 0.98.0, 0.95.1 Attachments: hbase-8631.patch, hbase-8631-v2.patch, hbase-8631-v3.patch, hbase-8631-v4.patch We have a separate WAL for the meta region, but the log splitting logic has not taken advantage of this and the splitlogworker still picks a WAL file randomly. Imagine multiple region servers, including the meta RS, failing at about the same time while the meta WAL is recovered last: all failed regions have to wait for meta to be recovered before they can come online again. 
The open JIRA is to let the splitlogworker pick a meta WAL file first, then the others. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
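The ordering HBASE-8631 asks for can be sketched as a stable sort that moves meta WALs to the front of the pending task list. The ".meta" suffix check mirrors the marker HBase uses for meta WAL files, but the helper below is illustrative, not the actual patch:

```java
// Sketch of "meta WAL first": when the split-log worker chooses among
// pending WAL files, files belonging to the meta region are moved to
// the front so meta recovers before user regions do.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class MetaFirstOrder {
  static boolean isMetaWal(String walName) {
    return walName.endsWith(".meta");
  }

  public static List<String> orderMetaFirst(List<String> walNames) {
    List<String> ordered = new ArrayList<>(walNames);
    // Stable sort: meta WALs first, relative order otherwise preserved.
    ordered.sort(Comparator.comparingInt(name -> isMetaWal(name) ? 0 : 1));
    return ordered;
  }
}
```

Because the sort is stable, non-meta WALs keep whatever order the worker's scan produced; only the meta files jump the queue.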
[jira] [Commented] (HBASE-8659) LruBlockCache logs too much
[ https://issues.apache.org/jira/browse/HBASE-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671359#comment-13671359 ] Hudson commented on HBASE-8659: --- Integrated in HBase-TRUNK #4152 (See [https://builds.apache.org/job/HBase-TRUNK/4152/]) HBASE-8659 LruBlockCache logs too much (Revision 1488052) Result = FAILURE sershe : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java LruBlockCache logs too much --- Key: HBASE-8659 URL: https://issues.apache.org/jira/browse/HBASE-8659 Project: HBase Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HBASE-8659-v0.patch LruBlockCache logs too much. {code} grep -c . hbase-hbase-regionserver-.log 77539 grep -c LruBlockCache hbase-hbase-regionserver-..log 64459 {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8658) hbase clean is deaf to the --config DIR option
[ https://issues.apache.org/jira/browse/HBASE-8658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671358#comment-13671358 ] Hudson commented on HBASE-8658: --- Integrated in HBase-TRUNK #4152 (See [https://builds.apache.org/job/HBase-TRUNK/4152/]) HBASE-8658 hbase clean is deaf to the --config DIR option (Revision 1488045) Result = FAILURE stack : Files : * /hbase/trunk/bin/hbase * /hbase/trunk/bin/hbase-cleanup.sh hbase clean is deaf to the --config DIR option -- Key: HBASE-8658 URL: https://issues.apache.org/jira/browse/HBASE-8658 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Fix For: 0.98.0, 0.95.1 Attachments: 8658.txt We need this doing migrations. I'd think lots of folks will have their configs other than default location (I need it testing migration) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-5110) code enhancement - remove unnecessary if-checks in every loop in HLog class
[ https://issues.apache.org/jira/browse/HBASE-5110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-5110: - Attachment: HBASE-5110_1.patch code enhancement - remove unnecessary if-checks in every loop in HLog class --- Key: HBASE-5110 URL: https://issues.apache.org/jira/browse/HBASE-5110 Project: HBase Issue Type: Improvement Components: wal Affects Versions: 0.90.1, 0.90.2, 0.90.4, 0.92.0 Reporter: Mikael Sitruk Priority: Minor Attachments: HBASE-5110_1.patch The HLog class (method findMemstoresWithEditsEqualOrOlderThan) has an unnecessary if-check in a loop.
{code}
static byte[][] findMemstoresWithEditsEqualOrOlderThan(final long oldestWALseqid,
    final Map<byte[], Long> regionsToSeqids) {
  // This method is static so it can be unit tested the easier.
  List<byte[]> regions = null;
  for (Map.Entry<byte[], Long> e : regionsToSeqids.entrySet()) {
    if (e.getValue().longValue() <= oldestWALseqid) {
      if (regions == null) regions = new ArrayList<byte[]>();
      regions.add(e.getKey());
    }
  }
  return regions == null ? null
      : regions.toArray(new byte[][] { HConstants.EMPTY_BYTE_ARRAY });
}
{code}
The following change is suggested:
{code}
static byte[][] findMemstoresWithEditsEqualOrOlderThan(final long oldestWALseqid,
    final Map<byte[], Long> regionsToSeqids) {
  // This method is static so it can be unit tested the easier.
  List<byte[]> regions = new ArrayList<byte[]>();
  for (Map.Entry<byte[], Long> e : regionsToSeqids.entrySet()) {
    if (e.getValue().longValue() <= oldestWALseqid) {
      regions.add(e.getKey());
    }
  }
  return regions.size() == 0 ? null
      : regions.toArray(new byte[][] { HConstants.EMPTY_BYTE_ARRAY });
}
{code}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-5110) code enhancement - remove unnecessary if-checks in every loop in HLog class
[ https://issues.apache.org/jira/browse/HBASE-5110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] samar updated HBASE-5110: - Status: Patch Available (was: Open) code enhancement - remove unnecessary if-checks in every loop in HLog class --- Key: HBASE-5110 URL: https://issues.apache.org/jira/browse/HBASE-5110 Project: HBase Issue Type: Improvement Components: wal Affects Versions: 0.92.0, 0.90.4, 0.90.2, 0.90.1 Reporter: Mikael Sitruk Priority: Minor Attachments: HBASE-5110_1.patch The HLog class (method findMemstoresWithEditsEqualOrOlderThan) has an unnecessary if-check in a loop.
{code}
static byte[][] findMemstoresWithEditsEqualOrOlderThan(final long oldestWALseqid,
    final Map<byte[], Long> regionsToSeqids) {
  // This method is static so it can be unit tested the easier.
  List<byte[]> regions = null;
  for (Map.Entry<byte[], Long> e : regionsToSeqids.entrySet()) {
    if (e.getValue().longValue() <= oldestWALseqid) {
      if (regions == null) regions = new ArrayList<byte[]>();
      regions.add(e.getKey());
    }
  }
  return regions == null ? null
      : regions.toArray(new byte[][] { HConstants.EMPTY_BYTE_ARRAY });
}
{code}
The following change is suggested:
{code}
static byte[][] findMemstoresWithEditsEqualOrOlderThan(final long oldestWALseqid,
    final Map<byte[], Long> regionsToSeqids) {
  // This method is static so it can be unit tested the easier.
  List<byte[]> regions = new ArrayList<byte[]>();
  for (Map.Entry<byte[], Long> e : regionsToSeqids.entrySet()) {
    if (e.getValue().longValue() <= oldestWALseqid) {
      regions.add(e.getKey());
    }
  }
  return regions.size() == 0 ? null
      : regions.toArray(new byte[][] { HConstants.EMPTY_BYTE_ARRAY });
}
{code}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-5110) code enhancement - remove unnecessary if-checks in every loop in HLog class
[ https://issues.apache.org/jira/browse/HBASE-5110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671422#comment-13671422 ] Hadoop QA commented on HBASE-5110: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12585595/HBASE-5110_1.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.replication.TestReplicationQueueFailover Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/5899//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5899//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5899//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5899//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5899//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5899//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5899//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5899//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5899//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/5899//console This message is automatically generated. code enhancement - remove unnecessary if-checks in every loop in HLog class --- Key: HBASE-5110 URL: https://issues.apache.org/jira/browse/HBASE-5110 Project: HBase Issue Type: Improvement Components: wal Affects Versions: 0.90.1, 0.90.2, 0.90.4, 0.92.0 Reporter: Mikael Sitruk Priority: Minor Attachments: HBASE-5110_1.patch The HLog class (method findMemstoresWithEditsEqualOrOlderThan) has unnecessary if check in a loop. 
{code}
static byte[][] findMemstoresWithEditsEqualOrOlderThan(final long oldestWALseqid,
    final Map<byte[], Long> regionsToSeqids) {
  // This method is static so it can be unit tested the easier.
  List<byte[]> regions = null;
  for (Map.Entry<byte[], Long> e : regionsToSeqids.entrySet()) {
    if (e.getValue().longValue() <= oldestWALseqid) {
      if (regions == null) regions = new ArrayList<byte[]>();
      regions.add(e.getKey());
    }
  }
  return regions == null ? null
      : regions.toArray(new byte[][] { HConstants.EMPTY_BYTE_ARRAY });
}
{code}
The following change is suggested:
{code}
static byte[][] findMemstoresWithEditsEqualOrOlderThan(final long oldestWALseqid,
    final Map<byte[], Long> regionsToSeqids) {
  // This method is static so it can be unit tested the easier.
  List<byte[]> regions = new ArrayList<byte[]>();
  for (Map.Entry<byte[], Long> e : regionsToSeqids.entrySet()) {
    if (e.getValue().longValue() <= oldestWALseqid) {
      regions.add(e.getKey());
    }
  }
  return regions.size() == 0 ? null
      : regions.toArray(new byte[][] { HConstants.EMPTY_BYTE_ARRAY });
}
{code}
-- This message
[jira] [Commented] (HBASE-8642) [Snapshot] List and delete snapshot by table
[ https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671467#comment-13671467 ] Jonathan Hsieh commented on HBASE-8642: --- The idea of a delete by table name commands seems reasonable. This might get confusing if we have a rename table functionality but at the moment we don't. [Snapshot] List and delete snapshot by table Key: HBASE-8642 URL: https://issues.apache.org/jira/browse/HBASE-8642 Project: HBase Issue Type: Improvement Components: snapshots Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Reporter: Julian Zhou Assignee: Julian Zhou Priority: Minor Fix For: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch Support list and delete snapshot by table name. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8642) [Snapshot] List and delete snapshot by table
[ https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671480#comment-13671480 ] Ted Yu commented on HBASE-8642: --- {code} +def delete_snapshots_by_table(table) {code} Can the new commands be made shorter ? [Snapshot] List and delete snapshot by table Key: HBASE-8642 URL: https://issues.apache.org/jira/browse/HBASE-8642 Project: HBase Issue Type: Improvement Components: snapshots Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Reporter: Julian Zhou Assignee: Julian Zhou Priority: Minor Fix For: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch Support list and delete snapshot by table name. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8640) ServerName in master may not initialize with the configured ipc address of hbase.master.ipc.address
[ https://issues.apache.org/jira/browse/HBASE-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671504#comment-13671504 ] Anoop Sam John commented on HBASE-8640: --- At RS side we do this way already. Looks good ServerName in master may not initialize with the configured ipc address of hbase.master.ipc.address --- Key: HBASE-8640 URL: https://issues.apache.org/jira/browse/HBASE-8640 Project: HBase Issue Type: Bug Components: master Reporter: rajeshbabu Assignee: rajeshbabu Fix For: 0.98.0, 0.95.2, 0.94.9 Attachments: HBASE-8640.patch We are starting the rpc server with the default interface hostname or the configured ipc address:
{code}
this.rpcServer = HBaseRPC.getServer(this,
    new Class<?>[]{HMasterInterface.class, HMasterRegionInterface.class},
    initialIsa.getHostName(), // This is bindAddress if set else it's hostname
    initialIsa.getPort(),
    numHandlers,
    0, // we dont use high priority handlers in master
    conf.getBoolean("hbase.rpc.verbose", false),
    conf, 0); // this is a DNC w/o high priority handlers
{code}
But we are always initializing the ServerName with the default hostname; the master znode also has this hostname.
{code}
String hostname = Strings.domainNamePointerToHostName(DNS.getDefaultHost(
    conf.get("hbase.master.dns.interface", "default"),
    conf.get("hbase.master.dns.nameserver", "default")));
...
this.serverName = new ServerName(hostname, this.isa.getPort(), System.currentTimeMillis());
{code}
If the default interface hostname and the configured ipc address are not the same, clients will get MasterNotRunningException. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
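The inconsistency HBASE-8640 describes is between the address the RPC server binds to and the hostname baked into the master's ServerName. The selection the fix implies can be sketched as follows; chooseServerHostname is a hypothetical helper, not an HBase API, and the wildcard handling is an assumption:

```java
// Sketch: derive the hostname for the master's ServerName from the same
// source as the RPC bind address (hbase.master.ipc.address), falling back
// to the DNS-derived default host only when no ipc address is configured
// or it is the 0.0.0.0 wildcard.
public class MasterHostname {
  public static String chooseServerHostname(String configuredIpcAddress,
      String dnsDefaultHost) {
    if (configuredIpcAddress != null
        && !configuredIpcAddress.isEmpty()
        && !configuredIpcAddress.equals("0.0.0.0")) {
      // Keep ServerName (and the master znode) consistent with the bind address.
      return configuredIpcAddress;
    }
    return dnsDefaultHost;
  }
}
```

With this rule, the name published in the master znode always matches an address clients can actually reach, avoiding MasterNotRunningException.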
[jira] [Commented] (HBASE-8635) Define prefetcher.resultsize.max as percentage
[ https://issues.apache.org/jira/browse/HBASE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671547#comment-13671547 ] Jimmy Xiang commented on HBASE-8635: [~anoop.hbase], what's your suggestion? I think it is reasonable to default to 10%. It is the upper limit, not reserved all the time. Define prefetcher.resultsize.max as percentage -- Key: HBASE-8635 URL: https://issues.apache.org/jira/browse/HBASE-8635 Project: HBase Issue Type: Improvement Reporter: Ted Yu Assignee: Jimmy Xiang Priority: Minor Attachments: trunk-8635.patch Currently hbase.hregionserver.prefetcher.resultsize.max defines global limit for prefetching. The default value is 256MB. It would be more flexible to define this measure as a percentage of the heap. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
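Expressing the prefetch limit as a fraction of the heap, as HBASE-8635 proposes, reduces to a small computation. A sketch under the assumptions of the discussion (10% default, upper limit rather than a reservation); the helper name is hypothetical:

```java
// Sketch: turn a percentage-of-heap setting into a byte cap. With a 0.10
// default, a 4 GB heap yields a ~410 MB prefetch cap instead of the fixed
// 256 MB default, and small heaps get proportionally smaller caps.
public class PrefetchLimit {
  public static long maxResultSize(long maxHeapBytes, double heapFraction) {
    if (heapFraction <= 0.0 || heapFraction > 1.0) {
      throw new IllegalArgumentException("fraction must be in (0, 1]");
    }
    return (long) (maxHeapBytes * heapFraction);
  }
}
```

In a server this would be fed Runtime.getRuntime().maxMemory() and the configured fraction; the validation guards against a misconfigured value silently capping prefetch at zero or above the heap.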
[jira] [Commented] (HBASE-8642) [Snapshot] List and delete snapshot by table
[ https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671591#comment-13671591 ] Matteo Bertozzi commented on HBASE-8642: {quote}this might get confusing if we have a rename table functionality but at the moment we don't.{quote} right... we sort of have the rename table mentioned in the docbook... since we don't have a proper naming system, my guess is that we should just alert the user... if you rename the table snapshots are not updated with the new table name... [Snapshot] List and delete snapshot by table Key: HBASE-8642 URL: https://issues.apache.org/jira/browse/HBASE-8642 Project: HBase Issue Type: Improvement Components: snapshots Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Reporter: Julian Zhou Assignee: Julian Zhou Priority: Minor Fix For: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch Support list and delete snapshot by table name. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8534) fix coverage org.apache.hadoop.hbase.mapreduce
[ https://issues.apache.org/jira/browse/HBASE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671598#comment-13671598 ] Nick Dimiduk commented on HBASE-8534: - This one's looking good. My only complaint is the docstring on {{LauncherSecurityManager}} needs updating. +1 from me, pending docstring update and green lights from BuilderBot. [~aleksgor], please cancel and resubmit patch when you have the next version posted, and then hopefully [~nkeywal] or [~yuzhih...@gmail.com] will commit. fix coverage org.apache.hadoop.hbase.mapreduce -- Key: HBASE-8534 URL: https://issues.apache.org/jira/browse/HBASE-8534 Project: HBase Issue Type: Test Affects Versions: 0.94.8, 0.95.2 Reporter: Aleksey Gorshkov Assignee: Aleksey Gorshkov Attachments: HBASE-8534-0.94-d.patch, HBASE-8534-0.94-e.patch, HBASE-8534-0.94-f.patch, HBASE-8534-0.94-g.patch, HBASE-8534-0.94.patch, HBASE-8534-trunk-a.patch, HBASE-8534-trunk-b.patch, HBASE-8534-trunk-c.patch, HBASE-8534-trunk-d.patch, HBASE-8534-trunk-e.patch, HBASE-8534-trunk-f.patch, HBASE-8534-trunk-g.patch, HBASE-8534-trunk.patch fix coverage org.apache.hadoop.hbase.mapreduce patch HBASE-8534-0.94.patch for branch-0.94 patch HBASE-8534-trunk.patch for branch-0.95 and trunk -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8635) Define prefetcher.resultsize.max as percentage
[ https://issues.apache.org/jira/browse/HBASE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671603#comment-13671603 ] Anoop Sam John commented on HBASE-8635: --- 10% is reasonable. Only thing is we need to consider this also in the above scenario. May be we need to reduce the block cache max% to 30? [~stack] what do you? bq.It is the upper limit, not reserved all the time. Ya block cache also upper limit only. My point was what if ,at some point, all these caches taking its max possible? Will have only 10% remaining memory for op of the RS. Will that be enough? Define prefetcher.resultsize.max as percentage -- Key: HBASE-8635 URL: https://issues.apache.org/jira/browse/HBASE-8635 Project: HBase Issue Type: Improvement Reporter: Ted Yu Assignee: Jimmy Xiang Priority: Minor Attachments: trunk-8635.patch Currently hbase.hregionserver.prefetcher.resultsize.max defines global limit for prefetching. The default value is 256MB. It would be more flexible to define this measure as a percentage of the heap. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (HBASE-8635) Define prefetcher.resultsize.max as percentage
[ https://issues.apache.org/jira/browse/HBASE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671603#comment-13671603 ] Anoop Sam John edited comment on HBASE-8635 at 5/31/13 4:08 PM: 10% is reasonable. Only thing is we need to consider this also in the above scenario. May be we need to reduce the block cache max% to 30? [~stack] what do you say? bq.It is the upper limit, not reserved all the time. Ya block cache also upper limit only. My point was what if ,at some point, all these caches taking its max possible? Will have only 10% remaining memory for op of the RS. Will that be enough? was (Author: anoop.hbase): 10% is reasonable. Only thing is we need to consider this also in the above scenario. May be we need to reduce the block cache max% to 30? [~stack] what do you? bq.It is the upper limit, not reserved all the time. Ya block cache also upper limit only. My point was what if ,at some point, all these caches taking its max possible? Will have only 10% remaining memory for op of the RS. Will that be enough? Define prefetcher.resultsize.max as percentage -- Key: HBASE-8635 URL: https://issues.apache.org/jira/browse/HBASE-8635 Project: HBase Issue Type: Improvement Reporter: Ted Yu Assignee: Jimmy Xiang Priority: Minor Attachments: trunk-8635.patch Currently hbase.hregionserver.prefetcher.resultsize.max defines global limit for prefetching. The default value is 256MB. It would be more flexible to define this measure as a percentage of the heap. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8635) Define prefetcher.resultsize.max as percentage
[ https://issues.apache.org/jira/browse/HBASE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671612#comment-13671612 ] Jimmy Xiang commented on HBASE-8635: I agree. I think we can adjust the block cache upper limit percentage. If more memory is needed, users have to increase the total. Define prefetcher.resultsize.max as percentage -- Key: HBASE-8635 URL: https://issues.apache.org/jira/browse/HBASE-8635 Project: HBase Issue Type: Improvement Reporter: Ted Yu Assignee: Jimmy Xiang Priority: Minor Attachments: trunk-8635.patch Currently hbase.hregionserver.prefetcher.resultsize.max defines global limit for prefetching. The default value is 256MB. It would be more flexible to define this measure as a percentage of the heap. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8635) Define prefetcher.resultsize.max as percentage
[ https://issues.apache.org/jira/browse/HBASE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671616#comment-13671616 ] Anoop Sam John commented on HBASE-8635: --- Yes. Both block cache and prefetch cache associate with scan and user can adjust as per the need. We can reduce the default block cache %. This was recently increased from 25 to 40%. If Stack is fine with this, we can change. Thanks Jimmy Define prefetcher.resultsize.max as percentage -- Key: HBASE-8635 URL: https://issues.apache.org/jira/browse/HBASE-8635 Project: HBase Issue Type: Improvement Reporter: Ted Yu Assignee: Jimmy Xiang Priority: Minor Attachments: trunk-8635.patch Currently hbase.hregionserver.prefetcher.resultsize.max defines global limit for prefetching. The default value is 256MB. It would be more flexible to define this measure as a percentage of the heap. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8534) fix coverage org.apache.hadoop.hbase.mapreduce
[ https://issues.apache.org/jira/browse/HBASE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671636#comment-13671636 ] Anoop Sam John commented on HBASE-8534: --- testIndexBuilder()
{code}
when(result.getValue(Bytes.toBytes("attributes"), Bytes.toBytes("column1"))).thenReturn(
+    Bytes.toBytes("test"));
{code}
The CF name we have used is columnFamily. This was hard-coded to attributes in IndexBuilder, which was corrected in HBASE-8641. So the actual code will call result.getValue(columnFamily, ...) now. Pls correct in the test also. The reason I said a mini cluster FT is better: even with the above bug present, the test case was not able to find it. :( In some other test cases I can see mini cluster usage now. It is ok to go with mock in testIndexBuilder. If the QA passes, +1 for commit. fix coverage org.apache.hadoop.hbase.mapreduce -- Key: HBASE-8534 URL: https://issues.apache.org/jira/browse/HBASE-8534 Project: HBase Issue Type: Test Affects Versions: 0.94.8, 0.95.2 Reporter: Aleksey Gorshkov Assignee: Aleksey Gorshkov Attachments: HBASE-8534-0.94-d.patch, HBASE-8534-0.94-e.patch, HBASE-8534-0.94-f.patch, HBASE-8534-0.94-g.patch, HBASE-8534-0.94.patch, HBASE-8534-trunk-a.patch, HBASE-8534-trunk-b.patch, HBASE-8534-trunk-c.patch, HBASE-8534-trunk-d.patch, HBASE-8534-trunk-e.patch, HBASE-8534-trunk-f.patch, HBASE-8534-trunk-g.patch, HBASE-8534-trunk.patch fix coverage org.apache.hadoop.hbase.mapreduce patch HBASE-8534-0.94.patch for branch-0.94 patch HBASE-8534-trunk.patch for branch-0.95 and trunk -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8667) Master and Regionserver not able to communicate if both bound to different network interfaces on the same machine.
[ https://issues.apache.org/jira/browse/HBASE-8667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671639#comment-13671639 ] Anoop Sam John commented on HBASE-8667: --- So here the RS server was bound to a different hostname/ip, but the master sees the RS identity as another. The master looks at the connection's remote host name to decide what the hostname of the RS is. In HRS:
{code}
if (key.equals(HConstants.KEY_FOR_HOSTNAME_SEEN_BY_MASTER)) {
  String hostnameFromMasterPOV = e.getValue();
  this.serverNameFromMasterPOV = new ServerName(hostnameFromMasterPOV,
      this.isa.getPort(), this.startcode);
  if (!this.serverNameFromMasterPOV.equals(this.isa.getHostName())) {
    LOG.info("Master passed us a different hostname to use; was="
        + this.isa.getHostName() + ", but now="
        + this.serverNameFromMasterPOV.getHostname());
  }
  continue;
}
{code}
When the master takes some other hostname for this RS, we just log that on the RS side and continue. So can we pass the hostname which is actually bound to the RS server to the master when it reports? The master would then use that to communicate with the RS from then on. With this change I am able to start the cluster successfully in Rajesh's scenario. If this change sounds fine, I can submit the patch. Master and Regionserver not able to communicate if both bound to different network interfaces on the same machine. -- Key: HBASE-8667 URL: https://issues.apache.org/jira/browse/HBASE-8667 Project: HBase Issue Type: Bug Components: IPC/RPC Reporter: rajeshbabu Assignee: rajeshbabu Fix For: 0.98.0, 0.95.2, 0.94.9 While testing the HBASE-8640 fix, found that a master and regionserver running on different interfaces are not communicating properly. I have two interfaces 1) lo 2) eth0 in my machine and the default hostname interface is lo. I have configured the master ipc address to the ip of the eth0 interface. Started the master and regionserver on the same machine. 
1) master rpc server bound to eth0 and RS rpc server bound to lo 2) Since the rpc client is not binding to any IP address, when the RS is reporting its startup it's getting registered with the eth0 IP address (but actually it should register as localhost). Here are the RS logs:
{code}
2013-05-31 06:05:28,608 WARN [regionserver60020] org.apache.hadoop.hbase.regionserver.HRegionServer: reportForDuty failed; sleeping and then retrying.
2013-05-31 06:05:31,609 INFO [regionserver60020] org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect to Master server at 192.168.0.100,6,1369960497008
2013-05-31 06:05:31,609 INFO [regionserver60020] org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at 192.168.0.100,6,1369960497008 that we are up with port=60020, startcode=1369960502544
2013-05-31 06:05:31,618 DEBUG [regionserver60020] org.apache.hadoop.hbase.regionserver.HRegionServer: Config from master: hbase.rootdir=hdfs://localhost:2851/hbase
2013-05-31 06:05:31,618 DEBUG [regionserver60020] org.apache.hadoop.hbase.regionserver.HRegionServer: Config from master: fs.default.name=hdfs://localhost:2851
2013-05-31 06:05:31,618 INFO [regionserver60020] org.apache.hadoop.hbase.regionserver.HRegionServer: Master passed us a different hostname to use; was=localhost, but now=192.168.0.100
{code}
Here are the master logs:
{code}
2013-05-31 06:05:31,615 INFO [IPC Server handler 9 on 6] org.apache.hadoop.hbase.master.ServerManager: Registering server=192.168.0.100,60020,1369960502544
{code}
Since the master has the wrong rpc server address of the RS, META is not getting assigned. 
{code}
2013-05-31 06:05:34,362 DEBUG [master-192.168.0.100,6,1369960497008] org.apache.hadoop.hbase.master.AssignmentManager: No previous transition plan was found (or we are ignoring an existing plan) for .META.,,1.1028785192 so generated a random one; hri=.META.,,1.1028785192, src=, dest=192.168.0.100,60020,1369960502544; 1 (online=1, available=1) available servers, forceNewPlan=false
- org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of .META.,,1.1028785192 to 192.168.0.100,60020,1369960502544, trying to assign elsewhere instead; try=1 of 10
java.net.ConnectException: Connection refused
  at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
  at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
  at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
  at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
  at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
  at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:549)
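The root cause in 2) above is that an unbound client socket lets the OS pick the outgoing interface, and the master then registers the RS under that interface's address. A minimal `java.net` sketch of the binding idea floated in this thread: bind the client socket to the RS's own interface (port 0, i.e. ephemeral) before connecting, so the peer address the master sees matches the RS's identity. `LOCAL_BIND_ADDR` is a hypothetical stand-in for the RS's bound address, not an actual HBase configuration key:

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ClientBindSketch {
    // Hypothetical: the interface the RS RPC server is bound to.
    static final String LOCAL_BIND_ADDR = "127.0.0.1";

    /**
     * Create an unconnected socket bound to the configured local interface.
     * Port 0 lets the OS pick an ephemeral port; only the address matters,
     * so the remote end would see the connection arriving from this address.
     */
    static Socket boundClientSocket() throws Exception {
        Socket s = new Socket();
        s.bind(new InetSocketAddress(InetAddress.getByName(LOCAL_BIND_ADDR), 0));
        return s;
    }

    public static void main(String[] args) throws Exception {
        Socket s = boundClientSocket();
        // After bind(), the local address is fixed even before connect().
        System.out.println(s.getLocalAddress().getHostAddress());
        s.close();
    }
}
```

After `bind()`, `getLocalAddress()` reflects the chosen interface regardless of routing, which is what makes the master-side peer address deterministic in this sketch.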
[jira] [Updated] (HBASE-8668) TestHLogSplit.generateHLog() does not use local variables for entries
[ https://issues.apache.org/jira/browse/HBASE-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-8668: -- Resolution: Fixed Fix Version/s: 0.95.1 0.98.0 Status: Resolved (was: Patch Available) Committed to trunk and 0.95. Simple patch. TestHLogSplit.generateHLog() does not use local variables for entries - Key: HBASE-8668 URL: https://issues.apache.org/jira/browse/HBASE-8668 Project: HBase Issue Type: Test Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Priority: Trivial Fix For: 0.98.0, 0.95.1 Attachments: HBASE-8668.patch
{code}
private HLog.Writer [] generateHLogs(final int writers, final int entries, final int leaveOpen)
    throws IOException {
  return generateHLogs((DistributedFileSystem)this.fs, writers, ENTRIES, leaveOpen);
}
{code}
Here we should use the local variable entries instead of ENTRIES. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
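The bug above is a classic parameter-shadowing slip: the method accepts `entries` but forwards the class-level constant `ENTRIES`. It reduces to a minimal example (names below are illustrative, not the actual test code):

```java
public class ShadowingDemo {
    // Class-level constant, mirrors the test's ENTRIES field.
    static final int ENTRIES = 10;

    // Buggy: ignores its parameter, as in the reported generateHLogs.
    static int buggyCount(int entries) {
        return ENTRIES;
    }

    // Fixed: forwards the local parameter, as the patch does.
    static int fixedCount(int entries) {
        return entries;
    }

    public static void main(String[] args) {
        System.out.println(buggyCount(3)); // 10, not the 3 the caller asked for
        System.out.println(fixedCount(3)); // 3
    }
}
```

Because every existing caller happened to pass a value equal to `ENTRIES`, the bug was silent until a caller wanted a different count, which is why it survived as long as it did.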
[jira] [Updated] (HBASE-8408) Implement namespace
[ https://issues.apache.org/jira/browse/HBASE-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francis Liu updated HBASE-8408: --- Attachment: HBASE-8015_6.patch You will need TestNamespaceUpgrade.tgz in hbase-server/src/test/data to run TestNamespaceUpgrade unit test Implement namespace --- Key: HBASE-8408 URL: https://issues.apache.org/jira/browse/HBASE-8408 Project: HBase Issue Type: Sub-task Reporter: Francis Liu Assignee: Francis Liu Attachments: HBASE-8015_1.patch, HBASE-8015_2.patch, HBASE-8015_3.patch, HBASE-8015_4.patch, HBASE-8015_5.patch, HBASE-8015_6.patch, HBASE-8015.patch, TestNamespaceMigration.tgz -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8408) Implement namespace
[ https://issues.apache.org/jira/browse/HBASE-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francis Liu updated HBASE-8408: --- Attachment: TestNamespaceUpgrade.tgz Implement namespace --- Key: HBASE-8408 URL: https://issues.apache.org/jira/browse/HBASE-8408 Project: HBase Issue Type: Sub-task Reporter: Francis Liu Assignee: Francis Liu Attachments: HBASE-8015_1.patch, HBASE-8015_2.patch, HBASE-8015_3.patch, HBASE-8015_4.patch, HBASE-8015_5.patch, HBASE-8015_6.patch, HBASE-8015.patch, TestNamespaceMigration.tgz, TestNamespaceUpgrade.tgz -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8663) a HBase Shell command to list the tables replicated (from or to) current cluster
[ https://issues.apache.org/jira/browse/HBASE-8663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671671#comment-13671671 ] Demai Ni commented on HBASE-8663: - Anoop, thanks. As you can tell, I am very new here and glad to be part of the community. I will upload the patch for the master next week, which should be straightforward. On the slave side, let me explore your idea first; appreciate the suggestion. What motivated me is the 'requirement' of read-only for the tables replicated on the slave. It is kind of hard for an administrator to keep track (except for a note on his/her desk), and not accessible to an application programmer (who can mess up the replicated tables with some simple puts). The problem may show up in a troublesome way for large enterprise-level users, who have many application developers in different locations/timezones who can only access the slave clusters. With this thought, I am thinking toward a solution for such a scenario, and the API/command to show the replicated tables will be the first step. a HBase Shell command to list the tables replicated (from or to) current cluster Key: HBASE-8663 URL: https://issues.apache.org/jira/browse/HBASE-8663 Project: HBase Issue Type: New Feature Components: Replication, shell Environment: clusters setup as Master and Slave for replication of tables Reporter: Demai Ni Priority: Minor This jira is to provide a hbase shell command which can give users an overview of the tables/columnfamilies currently being replicated. The information will help system administrators with design and planning, and also help application programmers know which tables/columns should be watched out for (for example, not modifying a replicated columnfamily on the slave cluster). Currently there is no easy way to tell which table(s)/columnfamily(ies) are replicated from or to a particular cluster. 
On the Master cluster, an indirect method can be used by combining two steps: 1) $describe 'usertable' and 2) $list_peers to map the REPLICATION_SCOPE to the target (aka slave) cluster. On the slave cluster, there is no existing API/method to list all the tables replicated to this cluster. Here is an example, and a prototype for the Master cluster
{code: title=hbase shell command:list_replicated_tables |borderStyle=solid}
hbase(main):001:0> list_replicated_tables
TABLE      COLUMNFAMILY  TARGET_CLUSTER
scores     course        hdtest017.svl.ibm.com:2181:/hbase
t3_dn      cf1           hdtest017.svl.ibm.com:2181:/hbase
usertable  family        hdtest017.svl.ibm.com:2181:/hbase
3 row(s) in 0.3380 seconds
{code}
{code: title=method to return all columnfamilies replicated from this cluster |borderStyle=solid}
/**
 * ReplicationAdmin.listReplicated
 * @return List of the replicated column families of this cluster for display.
 * @throws IOException
 */
public List<String[]> listReplicated() throws IOException {
  List<String[]> replicatedColFams = new ArrayList<String[]>();
  HTableDescriptor[] tables;
  tables = this.connection.listTables();
  Map<String, String> peers = listPeers();
  for (HTableDescriptor table : tables) {
    HColumnDescriptor[] columns = table.getColumnFamilies();
    String tableName = table.getNameAsString();
    for (HColumnDescriptor column : columns) {
      int scope = column.getScope();
      if (scope != 0) {
        String[] replicatedEntry = new String[3];
        replicatedEntry[0] = tableName;
        replicatedEntry[1] = column.getNameAsString();
        replicatedEntry[2] = peers.get(Integer.toString(scope));
        replicatedColFams.add(replicatedEntry);
      }
    }
  }
  return replicatedColFams;
}
{code}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
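The core of the prototype above can be modeled without any HBase classes: walk the table descriptors, keep column families whose replication scope is non-zero, and map the scope id to a peer cluster key. A self-contained sketch (the table layout and peer-id mapping below are made up for illustration; real descriptors come from `connection.listTables()`):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ListReplicatedSketch {
    /**
     * tables: tableName -> (columnFamily -> replicationScope)
     * peers:  scopeId (as string) -> peer cluster key
     * Returns [table, columnFamily, targetCluster] triples for replicated CFs.
     */
    static List<String[]> listReplicated(Map<String, Map<String, Integer>> tables,
                                         Map<String, String> peers) {
        List<String[]> out = new ArrayList<>();
        for (Map.Entry<String, Map<String, Integer>> t : tables.entrySet()) {
            for (Map.Entry<String, Integer> cf : t.getValue().entrySet()) {
                int scope = cf.getValue();
                if (scope != 0) { // scope 0 means "not replicated"
                    out.add(new String[] { t.getKey(), cf.getKey(),
                                           peers.get(Integer.toString(scope)) });
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Integer>> tables = new LinkedHashMap<>();
        tables.put("scores", Map.of("course", 1));
        tables.put("local_only", Map.of("cf", 0));
        List<String[]> r = listReplicated(tables, Map.of("1", "peer1:2181:/hbase"));
        for (String[] row : r) {
            System.out.println(String.join(" ", row));
        }
    }
}
```

Note the limitation this inherits from the prototype: it only answers the master-side question; the slave side would still need a separate mechanism, as discussed in the comment.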
[jira] [Commented] (HBASE-8665) bad compaction priority behavior in queue can cause store to be blocked
[ https://issues.apache.org/jira/browse/HBASE-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671675#comment-13671675 ] Sergey Shelukhin commented on HBASE-8665: - Well, the effect of getting a faster compaction in this case is a pure accident, if there was not a smaller one already queued, it would still compact 6 according to policy. Also, out of many possible faster compactions in this case, bad one (later files) is chosen, so it's not really what user would expect. Policy should make such decisions - if we made policy prefer faster compactions for blocked store, we should have it in the policy, and so last-moment selection would still choose the best one. As for bumping the priority of current to what it would have been, it is actually equivalent to just sorting them by current store priority... I wonder if there's any fundamental reason to divorce selection from compaction? If we introduce compaction-based priority modifiers, not just store based, we could still apply them by doing selection in multiple stores and comparing priorities. Selecting is not that expensive, given how frequently we compact. bad compaction priority behavior in queue can cause store to be blocked --- Key: HBASE-8665 URL: https://issues.apache.org/jira/browse/HBASE-8665 Project: HBase Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Note that this can be solved by bumping up the number of compaction threads but still it seems like this priority inversion should be dealt with. There's a store with 1 big file and 3 flushes (1 2 3 4) sitting around and minding its own business when it decides to compact. Compaction (2 3 4) is created and put in queue, it's low priority, so it doesn't get out of the queue for some time - other stores are compacting. Meanwhile more files are flushed and at (1 2 3 4 5 6 7) it decides to compact (5 6 7). This compaction now has higher priority than the first one. 
After that if the load is high it enters a vicious cycle of compacting and compacting files as they arrive, with the store being blocked on and off, with the (2 3 4) compaction staying in the queue for up to ~20 minutes (that I've seen). I wonder why we do this thing where we queue compaction and compact separately. Perhaps we should take a snapshot of all store priorities, then do select in order and execute the first compaction we find. This will need a starvation safeguard too but should probably be better. Btw, the exploring compaction policy may be more prone to this, as it can select files from the middle, not just the beginning, which, given the treatment of already-selected files that was not changed from the old ratio-based one (all files with lower seqNums than the ones selected are also ineligible for further selection), will make more files ineligible (e.g. imagine with 10 blocking files, with 8 present (1-8), (6 7 8) being selected and getting stuck). Today I see the case that would also apply to the old policy, but yesterday I saw a file distribution something like this: 4,5g, 2,1g, 295,9m, 113,3m, 68,0m, 67,8m, 1,1g, 295,1m, 100,4m, unfortunately w/o enough logs to figure out how it resulted. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
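The inversion described here comes from a compaction's priority being frozen at enqueue time while the store's real urgency keeps changing. The suggestion in the comment, picking the next request by the store's *current* priority at dequeue, can be sketched with toy types (these are not HBase's actual queue or request classes; in HBase, a lower priority number is more urgent):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StalePrioritySketch {
    static class Request {
        final String store;
        final int priorityAtEnqueue; // frozen snapshot: the source of the inversion

        Request(String store, int priorityAtEnqueue) {
            this.store = store;
            this.priorityAtEnqueue = priorityAtEnqueue;
        }
    }

    /** Pick the next request by the store's current priority, ignoring the stale one. */
    static Request next(List<Request> queue, Map<String, Integer> currentPriority) {
        return queue.stream()
            .min(Comparator.comparingInt(r -> currentPriority.get(r.store)))
            .orElse(null);
    }

    public static void main(String[] args) {
        List<Request> q = new ArrayList<>();
        q.add(new Request("storeA", 5)); // enqueued when storeA had few files
        q.add(new Request("storeB", 3));

        Map<String, Integer> now = new HashMap<>();
        now.put("storeA", 1); // storeA has since accumulated files and is nearly blocked
        now.put("storeB", 4);

        // Despite its stale priority 5, storeA's request is served first.
        System.out.println(next(q, now).store);
    }
}
```

A real implementation would still need the starvation safeguard the comment mentions, since a store whose priority keeps improving could otherwise monopolize the compaction threads.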
[jira] [Comment Edited] (HBASE-8665) bad compaction priority behavior in queue can cause store to be blocked
[ https://issues.apache.org/jira/browse/HBASE-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671675#comment-13671675 ] Sergey Shelukhin edited comment on HBASE-8665 at 5/31/13 5:32 PM: -- Well, the effect of getting a faster compaction in this case is a pure accident, if there was not a smaller one already queued, it would still compact 6 according to policy. Also, out of many possible faster compactions in this case, bad one (later files) is chosen, so it's not really what user would expect. Policy should make such decisions - if we prefer faster compactions for blocked store, we should have it in the policy, and so last-moment selection would still choose the best one. As for bumping the priority of current to what it would have been, it is actually equivalent to just sorting them by current store priority... I wonder if there's any fundamental reason to divorce selection from compaction? If we introduce compaction-based priority modifiers, not just store based, we could still apply them by doing selection in multiple stores and comparing priorities. Selecting is not that expensive, given how frequently we compact. was (Author: sershe): Well, the effect of getting a faster compaction in this case is a pure accident, if there was not a smaller one already queued, it would still compact 6 according to policy. Also, out of many possible faster compactions in this case, bad one (later files) is chosen, so it's not really what user would expect. Policy should make such decisions - if we made policy prefer faster compactions for blocked store, we should have it in the policy, and so last-moment selection would still choose the best one. As for bumping the priority of current to what it would have been, it is actually equivalent to just sorting them by current store priority... I wonder if there's any fundamental reason to divorce selection from compaction? 
If we introduce compaction-based priority modifiers, not just store based, we could still apply them by doing selection in multiple stores and comparing priorities. Selecting is not that expensive, given how frequently we compact. bad compaction priority behavior in queue can cause store to be blocked --- Key: HBASE-8665 URL: https://issues.apache.org/jira/browse/HBASE-8665 Project: HBase Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Note that this can be solved by bumping up the number of compaction threads but still it seems like this priority inversion should be dealt with. There's a store with 1 big file and 3 flushes (1 2 3 4) sitting around and minding its own business when it decides to compact. Compaction (2 3 4) is created and put in the queue; it's low priority, so it doesn't get out of the queue for some time - other stores are compacting. Meanwhile more files are flushed and at (1 2 3 4 5 6 7) it decides to compact (5 6 7). This compaction now has higher priority than the first one. After that if the load is high it enters a vicious cycle of compacting and compacting files as they arrive, with the store being blocked on and off, with the (2 3 4) compaction staying in the queue for up to ~20 minutes (that I've seen). I wonder why we do this thing where we queue compaction and compact separately. Perhaps we should take a snapshot of all store priorities, then do select in order and execute the first compaction we find. This will need a starvation safeguard too but should probably be better. Btw, the exploring compaction policy may be more prone to this, as it can select files from the middle, not just the beginning, which, given the treatment of already-selected files that was not changed from the old ratio-based one (all files with lower seqNums than the ones selected are also ineligible for further selection), will make more files ineligible (e.g. imagine with 10 blocking files, with 8 present (1-8), (6 7 8) being selected and getting stuck). 
Today I see the case that would also apply to the old policy, but yesterday I saw a file distribution something like this: 4,5g, 2,1g, 295,9m, 113,3m, 68,0m, 67,8m, 1,1g, 295,1m, 100,4m, unfortunately w/o enough logs to figure out how it resulted. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-8669) change exploring compaction policy to prefer smaller compactions on blocked stores
Sergey Shelukhin created HBASE-8669: --- Summary: change exploring compaction policy to prefer smaller compactions on blocked stores Key: HBASE-8669 URL: https://issues.apache.org/jira/browse/HBASE-8669 Project: HBase Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Side-note from HBASE-8665 discussion. When we compact a blocked store, we might want to use a different heuristic to choose between the options. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8669) change exploring compaction policy to prefer smaller compactions on blocked stores
[ https://issues.apache.org/jira/browse/HBASE-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HBASE-8669: Issue Type: Improvement (was: Bug) change exploring compaction policy to prefer smaller compactions on blocked stores -- Key: HBASE-8669 URL: https://issues.apache.org/jira/browse/HBASE-8669 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Side-note from HBASE-8665 discussion. When we compact a blocked store, we might want to use a different heuristic to choose between the options. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8669) change exploring compaction policy to prefer smaller compactions on blocked stores
[ https://issues.apache.org/jira/browse/HBASE-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HBASE-8669: Priority: Minor (was: Major) change exploring compaction policy to prefer smaller compactions on blocked stores -- Key: HBASE-8669 URL: https://issues.apache.org/jira/browse/HBASE-8669 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Minor Side-note from HBASE-8665 discussion. When we compact a blocked store, we might want to use a different heuristic to choose between the options. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8635) Define prefetcher.resultsize.max as percentage
[ https://issues.apache.org/jira/browse/HBASE-8635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671701#comment-13671701 ] Jean-Daniel Cryans commented on HBASE-8635: --- I agree that prefetching should be checked in checkForClusterFreeMemoryLimit() but one thing that worries me is all those users that already use 80% of their heap with BC and MemStores that won't be able to restart HBase once they upgrade because they'd now be at 90% with pre-fetching. Next thing that worries me is that it's not clear to me that prefetching needs to scale with the amount of memory given to HBase. 10% of 1GB is ~100MB, of 10GB it's 1GB and of 24GB it's 2.4GB... that's a lot! Realistically, the main case I can think of that would do a lot of concurrent long scans is TIF. Let's say your TTs have 12 map slots, so you have as many scanners, and each of them read batches of 10MB (which is a lot), then you only need 12*10MB = 120MB for prefetching. I also voiced that concern in HBASE-8420 that even 256MB seems too big. Define prefetcher.resultsize.max as percentage -- Key: HBASE-8635 URL: https://issues.apache.org/jira/browse/HBASE-8635 Project: HBase Issue Type: Improvement Reporter: Ted Yu Assignee: Jimmy Xiang Priority: Minor Attachments: trunk-8635.patch Currently hbase.hregionserver.prefetcher.resultsize.max defines global limit for prefetching. The default value is 256MB. It would be more flexible to define this measure as a percentage of the heap. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
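The scaling concern in the comment is simple arithmetic: a percentage of the heap grows linearly with heap size, while the realistic demand (concurrent scanners times batch size) does not. Both calculations can be written out directly; in practice `heapBytes` would come from `Runtime.getRuntime().maxMemory()`, and the scanner and batch numbers below are the ones from the comment:

```java
public class PrefetchBudgetSketch {
    /** A percentage-of-heap budget, as the proposed config would compute it. */
    static long percentOfHeap(long heapBytes, double pct) {
        return (long) (heapBytes * pct);
    }

    /** A demand-based estimate: concurrent scanners times bytes per batch. */
    static long scannerDemand(int concurrentScanners, long batchBytes) {
        return concurrentScanners * batchBytes;
    }

    public static void main(String[] args) {
        long gb = 1024L * 1024 * 1024;
        long mb = 1024L * 1024;
        // 10% of a 24 GB heap is ~2.4 GB...
        System.out.println(percentOfHeap(24 * gb, 0.10) / mb + " MB");
        // ...while 12 TIF scanners reading 10 MB batches only need 120 MB.
        System.out.println(scannerDemand(12, 10 * mb) / mb + " MB");
    }
}
```

This is the gap the comment is pointing at: the percentage-based budget overshoots the plausible demand by an order of magnitude on large heaps.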
[jira] [Updated] (HBASE-8344) Improve the assignment when node failures happen to choose the secondary RS as the new primary RS
[ https://issues.apache.org/jira/browse/HBASE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Devaraj Das updated HBASE-8344: --- Attachment: hbase-8344-2.3.txt This patch has the following updates w.r.t the previous patch: 1. Fixes the issue pointed out in the last comment (I had made an assumption about the availability of the primary RS which may not be always true). 2. Makes the testcases more tight. 3. Refactors the code a bit to move out common code into a method.. Improve the assignment when node failures happen to choose the secondary RS as the new primary RS - Key: HBASE-8344 URL: https://issues.apache.org/jira/browse/HBASE-8344 Project: HBase Issue Type: Sub-task Reporter: Devaraj Das Assignee: Devaraj Das Priority: Critical Fix For: 0.95.2 Attachments: hbase-8344-1.txt, hbase-8344-2.1.txt, hbase-8344-2.2.txt, hbase-8344-2.3.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8669) change exploring compaction policy to prefer smaller compactions on blocked stores
[ https://issues.apache.org/jira/browse/HBASE-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HBASE-8669: Attachment: HBASE-8669-v0.patch First attempt at the patch. The early-preference is necessary to avoid choosing the smallest compactions in cases like 50 49 48 49 50, and getting screwed later because all files are dissimilar. This could be solved better in a more complicated manner by planning future selections, or looking at preceding/following files, but that would be overkill imho. change exploring compaction policy to prefer smaller compactions on blocked stores -- Key: HBASE-8669 URL: https://issues.apache.org/jira/browse/HBASE-8669 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Minor Attachments: HBASE-8669-v0.patch Side-note from HBASE-8665 discussion. When we compact a blocked store, we might want to use a different heuristic to choose between the options. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8669) change exploring compaction policy to prefer smaller compactions on blocked stores
[ https://issues.apache.org/jira/browse/HBASE-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HBASE-8669: Status: Patch Available (was: Open) change exploring compaction policy to prefer smaller compactions on blocked stores -- Key: HBASE-8669 URL: https://issues.apache.org/jira/browse/HBASE-8669 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Minor Attachments: HBASE-8669-v0.patch Side-note from HBASE-8665 discussion. When we compact a blocked store, we might want to use a different heuristic to choose between the options. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8665) bad compaction priority behavior in queue can cause store to be blocked
[ https://issues.apache.org/jira/browse/HBASE-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671731#comment-13671731 ] Elliott Clark commented on HBASE-8665: -- bq. Policy should make such decisions - if we prefer faster compactions for blocked store, we should have it in the policy, and so last-moment selection would still choose the best one. We already do that. If we think we're blocked then the exploring compaction policy chooses the smallest set of files. bad compaction priority behavior in queue can cause store to be blocked --- Key: HBASE-8665 URL: https://issues.apache.org/jira/browse/HBASE-8665 Project: HBase Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Note that this can be solved by bumping up the number of compaction threads but still it seems like this priority inversion should be dealt with. There's a store with 1 big file and 3 flushes (1 2 3 4) sitting around and minding its own business when it decides to compact. Compaction (2 3 4) is created and put in the queue; it's low priority, so it doesn't get out of the queue for some time - other stores are compacting. Meanwhile more files are flushed and at (1 2 3 4 5 6 7) it decides to compact (5 6 7). This compaction now has higher priority than the first one. After that if the load is high it enters a vicious cycle of compacting and compacting files as they arrive, with the store being blocked on and off, with the (2 3 4) compaction staying in the queue for up to ~20 minutes (that I've seen). I wonder why we do this thing where we queue compaction and compact separately. Perhaps we should take a snapshot of all store priorities, then do select in order and execute the first compaction we find. This will need a starvation safeguard too but should probably be better. 
Btw, the exploring compaction policy may be more prone to this, as it can select files from the middle, not just the beginning, which, given the treatment of already-selected files that was not changed from the old ratio-based one (all files with lower seqNums than the ones selected are also ineligible for further selection), will make more files ineligible (e.g. imagine with 10 blocking files, with 8 present (1-8), (6 7 8) being selected and getting stuck). Today I see the case that would also apply to the old policy, but yesterday I saw a file distribution something like this: 4,5g, 2,1g, 295,9m, 113,3m, 68,0m, 67,8m, 1,1g, 295,1m, 100,4m, unfortunately w/o enough logs to figure out how it resulted. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
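The behavior Elliott describes, the selection objective switching when the store is blocked, can be reduced to a toy selector: among candidate file sets, prefer the one compacting the most files normally, but the smallest total size when blocked, to free the store with the least I/O. This is a simplification for illustration, not the actual ExploringCompactionPolicy code:

```java
import java.util.Comparator;
import java.util.List;

public class BlockedSelectionSketch {
    /** Each candidate is a list of file sizes; returns the chosen candidate. */
    static List<Long> choose(List<List<Long>> candidates, boolean blocked) {
        Comparator<List<Long>> byTotalSize = Comparator.comparingLong(
            (List<Long> c) -> c.stream().mapToLong(Long::longValue).sum());
        if (blocked) {
            // Blocked: minimize I/O so the store unblocks as fast as possible.
            return candidates.stream().min(byTotalSize).orElse(null);
        }
        // Normal: prefer compacting more files (toy stand-in for the real scoring).
        return candidates.stream()
            .max(Comparator.comparingInt((List<Long> c) -> c.size()))
            .orElse(null);
    }

    public static void main(String[] args) {
        List<List<Long>> cands = List.of(List.of(5L, 6L, 7L), List.of(1L, 2L));
        System.out.println(choose(cands, true));  // smallest total size wins
        System.out.println(choose(cands, false)); // most files wins
    }
}
```

Keeping this decision inside the policy, rather than as a side effect of queue ordering, is exactly Sergey's point earlier in the thread: the last-moment selection then still picks the best option for the store's current state.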
[jira] [Commented] (HBASE-8667) Master and Regionserver not able to communicate if both bound to different network interfaces on the same machine.
[ https://issues.apache.org/jira/browse/HBASE-8667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671750#comment-13671750 ] rajeshbabu commented on HBASE-8667: --- bq. So here server @RS was bound to a different hostname/ip but the master sees RS identity as another. Master looks at connection's remote host name to decide what is the hostname of the RS. Yes Anoop, exactly this is happening. Presently no address is bound to the rpc client socket, so the connection's remote hostname is decided by the interface from which the communication happens. What about binding the master and RS ipc addresses (with port 0) to the rpc client sockets in the master and RS, to avoid problems like this issue? Master and Regionserver not able to communicate if both bound to different network interfaces on the same machine. -- Key: HBASE-8667 URL: https://issues.apache.org/jira/browse/HBASE-8667 Project: HBase Issue Type: Bug Components: IPC/RPC Reporter: rajeshbabu Assignee: rajeshbabu Fix For: 0.98.0, 0.95.2, 0.94.9 While testing the HBASE-8640 fix, I found that a master and regionserver running on different interfaces do not communicate properly. I have two interfaces 1) lo 2) eth0 in my machine, and the default hostname interface is lo. I have configured the master ipc address to the IP of the eth0 interface. Started master and regionserver on the same machine. 1) master rpc server bound to eth0 and RS rpc server bound to lo 2) Since the rpc client is not binding to any IP address, when the RS is reporting its startup it's getting registered with the eth0 IP address (but actually it should register as localhost). Here are the RS logs:
{code}
2013-05-31 06:05:28,608 WARN [regionserver60020] org.apache.hadoop.hbase.regionserver.HRegionServer: reportForDuty failed; sleeping and then retrying. 
2013-05-31 06:05:31,609 INFO [regionserver60020] org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect to Master server at 192.168.0.100,6,1369960497008
2013-05-31 06:05:31,609 INFO [regionserver60020] org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at 192.168.0.100,6,1369960497008 that we are up with port=60020, startcode=1369960502544
2013-05-31 06:05:31,618 DEBUG [regionserver60020] org.apache.hadoop.hbase.regionserver.HRegionServer: Config from master: hbase.rootdir=hdfs://localhost:2851/hbase
2013-05-31 06:05:31,618 DEBUG [regionserver60020] org.apache.hadoop.hbase.regionserver.HRegionServer: Config from master: fs.default.name=hdfs://localhost:2851
2013-05-31 06:05:31,618 INFO [regionserver60020] org.apache.hadoop.hbase.regionserver.HRegionServer: Master passed us a different hostname to use; was=localhost, but now=192.168.0.100
{code}
Here are the master logs:
{code}
2013-05-31 06:05:31,615 INFO [IPC Server handler 9 on 6] org.apache.hadoop.hbase.master.ServerManager: Registering server=192.168.0.100,60020,1369960502544
{code}
Since the master has the wrong rpc server address of the RS, META is not getting assigned. 
{code}
2013-05-31 06:05:34,362 DEBUG [master-192.168.0.100,6,1369960497008] org.apache.hadoop.hbase.master.AssignmentManager: No previous transition plan was found (or we are ignoring an existing plan) for .META.,,1.1028785192 so generated a random one; hri=.META.,,1.1028785192, src=, dest=192.168.0.100,60020,1369960502544; 1 (online=1, available=1) available servers, forceNewPlan=false
- org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of .META.,,1.1028785192 to 192.168.0.100,60020,1369960502544, trying to assign elsewhere instead; try=1 of 10
java.net.ConnectException: Connection refused
  at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
  at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
  at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
  at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
  at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
  at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupConnection(RpcClient.java:549)
  at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:813)
  at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1422)
  at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1315)
  at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1532)
  at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1587)
  at
[jira] [Commented] (HBASE-8344) Improve the assignment when node failures happen to choose the secondary RS as the new primary RS
[ https://issues.apache.org/jira/browse/HBASE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671769#comment-13671769 ] Hadoop QA commented on HBASE-8344: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12585651/hbase-8344-2.3.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 6 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 1 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.TestIOFencing Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/5900//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5900//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5900//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5900//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5900//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5900//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5900//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5900//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5900//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/5900//console This message is automatically generated. Improve the assignment when node failures happen to choose the secondary RS as the new primary RS - Key: HBASE-8344 URL: https://issues.apache.org/jira/browse/HBASE-8344 Project: HBase Issue Type: Sub-task Reporter: Devaraj Das Assignee: Devaraj Das Priority: Critical Fix For: 0.95.2 Attachments: hbase-8344-1.txt, hbase-8344-2.1.txt, hbase-8344-2.2.txt, hbase-8344-2.3.txt -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8669) change exploring compaction policy to prefer smaller compactions on blocked stores
[ https://issues.apache.org/jira/browse/HBASE-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671785#comment-13671785 ] Hadoop QA commented on HBASE-8669: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12585654/HBASE-8669-v0.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/5901//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5901//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5901//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5901//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5901//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5901//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5901//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5901//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5901//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/5901//console This message is automatically generated. change exploring compaction policy to prefer smaller compactions on blocked stores -- Key: HBASE-8669 URL: https://issues.apache.org/jira/browse/HBASE-8669 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Minor Attachments: HBASE-8669-v0.patch Side-note from HBASE-8665 discussion. When we compact a blocked store, we might want to use a different heuristic to choose between the options. -- This message is automatically generated by JIRA. 
[jira] [Commented] (HBASE-8665) bad compaction priority behavior in queue can cause store to be blocked
[ https://issues.apache.org/jira/browse/HBASE-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671810#comment-13671810 ] Sergey Shelukhin commented on HBASE-8665: - It only does that if nothing is in ratio. I filed a JIRA for that...
bad compaction priority behavior in queue can cause store to be blocked --- Key: HBASE-8665 URL: https://issues.apache.org/jira/browse/HBASE-8665 Project: HBase Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin
Note that this can be solved by bumping up the number of compaction threads, but it still seems like this priority inversion should be dealt with. There's a store with 1 big file and 3 flushes (1 2 3 4) sitting around and minding its own business when it decides to compact. Compaction (2 3 4) is created and put in the queue; it's low priority, so it doesn't get out of the queue for some time - other stores are compacting. Meanwhile more files are flushed, and at (1 2 3 4 5 6 7) it decides to compact (5 6 7). This compaction now has higher priority than the first one. After that, if the load is high, it enters a vicious cycle of compacting files as they arrive, with the store being blocked on and off, and with the (2 3 4) compaction staying in the queue for up to ~20 minutes (that I've seen).
I wonder why we do this thing where we queue the compaction and compact separately. Perhaps we should take a snapshot of all store priorities, then do the selection in order and execute the first compaction we find. This will need a starvation safeguard too, but should probably be better.
Btw, the exploring compaction policy may be more prone to this, as it can select files from the middle, not just the beginning, which will make more files ineligible, given the treatment of already-selected files that was not changed from the old ratio-based policy (all files with lower seqNums than the ones selected are also ineligible for further selection). E.g. imagine 10 blocking files, with 8 present (1-8), and (6 7 8) being selected and getting stuck. Today I see the case that would also apply to the old policy, but yesterday I saw a file distribution something like this: 4,5g, 2,1g, 295,9m, 113,3m, 68,0m, 67,8m, 1,1g, 295,1m, 100,4m, unfortunately without enough logs to figure out how it came about.
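The starvation Sergey describes is a classic priority-queue inversion: the old (2 3 4) request sits in the queue while each newer, higher-priority selection jumps ahead of it. A minimal toy sketch of that dynamic (invented names, not HBase's actual compaction queue):

```java
import java.util.PriorityQueue;

// Toy model of the inversion described above; not HBase code.
public class CompactionQueueSketch {
    /** A queued compaction request; a smaller priority value is served first. */
    static class Request implements Comparable<Request> {
        final String files;
        final int priority;
        Request(String files, int priority) { this.files = files; this.priority = priority; }
        public int compareTo(Request o) { return Integer.compare(priority, o.priority); }
    }

    /**
     * Enqueue an old low-priority request, then keep adding newer requests whose
     * priority rises as flushes pile up: the old request never reaches the head.
     */
    public static String starvedHead() {
        PriorityQueue<Request> queue = new PriorityQueue<>();
        queue.add(new Request("(2 3 4)", 5));   // queued first, low priority
        queue.add(new Request("(5 6 7)", 2));   // arrives later, higher priority
        queue.add(new Request("(8 9 10)", 1));  // store nearly blocked: highest priority
        return queue.peek().files;              // the newest work is served first
    }
}
```

The snapshot-then-select idea in the comment amounts to re-reading store priorities at execution time instead of trusting the priority frozen at enqueue time.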
[jira] [Updated] (HBASE-8344) Improve the assignment when node failures happen to choose the secondary RS as the new primary RS
[ https://issues.apache.org/jira/browse/HBASE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Devaraj Das updated HBASE-8344: --- Attachment: hbase-8344-2.4.txt Fixes the javadoc issue. Improve the assignment when node failures happen to choose the secondary RS as the new primary RS - Key: HBASE-8344 URL: https://issues.apache.org/jira/browse/HBASE-8344 Project: HBase Issue Type: Sub-task Reporter: Devaraj Das Assignee: Devaraj Das Priority: Critical Fix For: 0.95.2 Attachments: hbase-8344-1.txt, hbase-8344-2.1.txt, hbase-8344-2.2.txt, hbase-8344-2.3.txt, hbase-8344-2.4.txt
[jira] [Commented] (HBASE-8669) change exploring compaction policy to prefer smaller compactions on blocked stores
[ https://issues.apache.org/jira/browse/HBASE-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671837#comment-13671837 ] Sergey Shelukhin commented on HBASE-8669: - [~eclark] wdyt?
[jira] [Commented] (HBASE-8344) Improve the assignment when node failures happen to choose the secondary RS as the new primary RS
[ https://issues.apache.org/jira/browse/HBASE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671824#comment-13671824 ] Ted Yu commented on HBASE-8344: ---
{code}
+  public static Position getFavoredServerPosition(
+      List<ServerName> favoredNodes, ServerName server) {
+    if (favoredNodes == null || server == null ||
+        favoredNodes.size() != FavoredNodeAssignmentHelper.FAVORED_NODES_NUM) {
{code}
Should the condition in the last line be:
{code}
favoredNodes.size() = FavoredNodeAssignmentHelper.FAVORED_NODES_NUM
{code}
This way we can utilize some of the favored nodes.
[jira] [Commented] (HBASE-8598) enhance multithreadedaction/reader/writer to test better
[ https://issues.apache.org/jira/browse/HBASE-8598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671841#comment-13671841 ] Ted Yu commented on HBASE-8598: ---
{code}
+ * The class that provides optional request metrics to load-test-like tools.
+ * Methods are not thread safe unless otherwise noted.
+ */
+public class MultiActionMetrics {
{code}
Should the class be called MultiActionMetricsProvider?
{code}
+    baseTime = System.currentTimeMillis();
{code}
Please use EnvironmentEdge.
{code}
+  /** @return Combined latency CDF for all requests from all threads. Call after test. */
+  public ArrayList<Pair<Long, Long>> getCombinedCdf() {
{code}
Add javadoc for the meaning of the return value.
enhance multithreadedaction/reader/writer to test better Key: HBASE-8598 URL: https://issues.apache.org/jira/browse/HBASE-8598 Project: HBase Issue Type: Improvement Components: test Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HBASE-8598-v0.patch
To be able to test more varied scenarios, I am adding delete and overwrite threads to the writer; adding a random read thread to the reader; and adding some machine-oriented (csv) metric collection (QPS, histogram of all requests during the tests), because grepping logs for such stuff is a pain.
[jira] [Created] (HBASE-8670) [0.94] Backport HBASE-8449 to 0.94 (Refactor recoverLease retries and pauses)
Enis Soztutar created HBASE-8670: Summary: [0.94] Backport HBASE-8449 to 0.94 (Refactor recoverLease retries and pauses) Key: HBASE-8670 URL: https://issues.apache.org/jira/browse/HBASE-8670 Project: HBase Issue Type: Bug Components: Filesystem Integration, master, wal Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.94.9
Some history: up until 0.94.8, HBase did not check the result of the recoverLease() call, but things kind of worked since we check for 0-length files in distributed log split tasks from region servers. If lease recovery is not finished, the log file will report 0 length, the task will fail, and the master will then re-call recoverLease() and reassign the task. This scheme might fail for log files that are larger than one HDFS block, though. In 0.94.8, we committed HBASE-8354 (a backport of HBASE-7878) and later increased the sleep time to 4 secs in HBASE-8389. However, the proper solution arrived in trunk in HBASE-8449, which uses a backoff sleep policy plus the isFileClosed() API. We should backport this patch to 0.94 as well. isFileClosed() is released in Hadoop 1.2.0 (HDFS-4774) and 2.0.5 (HDFS-4525).
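As a rough illustration of the approach the backport would bring (a backoff pause between recoverLease() retries, short-circuiting once isFileClosed() reports the file closed), here is a sketch against a hypothetical interface standing in for DistributedFileSystem; the pause values are assumptions, not the patch's actual constants:

```java
// Hypothetical stand-in for the two HDFS calls the issue mentions.
interface LeaseFs {
    boolean recoverLease();  // true once lease recovery has completed
    boolean isFileClosed();  // true once the file is fully closed
}

public class RecoverLeaseSketch {
    /**
     * Retry recoverLease() with an increasing (backoff) pause, returning as soon
     * as either call reports success. Returns the number of attempts used.
     */
    public static int recoverWithBackoff(LeaseFs fs, int maxAttempts) {
        long pauseMs = 100;  // assumed starting pause
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (fs.recoverLease() || fs.isFileClosed()) {
                return attempt;
            }
            try {
                Thread.sleep(pauseMs);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return attempt;
            }
            pauseMs = Math.min(pauseMs * 2, 4000);  // cap akin to the 4s pause from HBASE-8389
        }
        return maxAttempts;
    }
}
```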
[jira] [Commented] (HBASE-8598) enhance multithreadedaction/reader/writer to test better
[ https://issues.apache.org/jira/browse/HBASE-8598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671848#comment-13671848 ] Ted Yu commented on HBASE-8598: ---
{code}
+/** Enables tracking the last inserted key. */
+void setTrackInsertedKeys();
{code}
Looks like tracking inserted keys would always be enabled but not disabled. Name the above method enableTrackingInsertedKeys()?
[jira] [Updated] (HBASE-8666) META region isn't fully recovered during master initialization when META region recovery had chained failures
[ https://issues.apache.org/jira/browse/HBASE-8666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeffrey Zhong updated HBASE-8666: - Attachment: hbase-8666-v2.patch Incorporated Ted's comments and reworked the fix a little to handle one more case: when all region servers are down, SSH tries to call assignMeta and fails because there is no available RS. If the master then restarts, oldMetaLocation=null in that situation, which skips META recovery. META region isn't fully recovered during master initialization when META region recovery had chained failures - Key: HBASE-8666 URL: https://issues.apache.org/jira/browse/HBASE-8666 Project: HBase Issue Type: Bug Components: MTTR Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Fix For: 0.98.0, 0.95.2 Attachments: hbase-8666.patch, hbase-8666-v2.patch In distributedLogReplay mode, when META recovery has experienced chained failures (recovery failed multiple times in a row), the META region can't be fully recovered while the master starts up.
[jira] [Commented] (HBASE-8344) Improve the assignment when node failures happen to choose the secondary RS as the new primary RS
[ https://issues.apache.org/jira/browse/HBASE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671859#comment-13671859 ] Devaraj Das commented on HBASE-8344: bq. This way we can utilize some of the favored nodes.
Ted, I don't get it... Anyway, this method just returns the position of the server in the list (after validating that it is of size 3). It doesn't do anything other than that.
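For readers following along, the behavior Devaraj describes (return the server's position in a favored-nodes list of exactly three entries, otherwise nothing) can be sketched like this; the class, enum, and String-based server names are illustrative stand-ins, not the actual FavoredNodeAssignmentHelper code:

```java
import java.util.List;

// Illustrative sketch of the position lookup discussed above; not HBase source.
public class FavoredPositionSketch {
    enum Position { PRIMARY, SECONDARY, TERTIARY }

    static final int FAVORED_NODES_NUM = 3;  // list size implied by the discussion

    /** Returns the server's slot in a 3-entry favored list, or null for invalid input. */
    public static Position getPosition(List<String> favoredNodes, String server) {
        if (favoredNodes == null || server == null
                || favoredNodes.size() != FAVORED_NODES_NUM) {
            return null;  // validation mirrors the snippet quoted in Ted's comment
        }
        int idx = favoredNodes.indexOf(server);
        return idx < 0 ? null : Position.values()[idx];
    }
}
```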
[jira] [Commented] (HBASE-8344) Improve the assignment when node failures happen to choose the secondary RS as the new primary RS
[ https://issues.apache.org/jira/browse/HBASE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671872#comment-13671872 ] Hadoop QA commented on HBASE-8344: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12585668/hbase-8344-2.4.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 6 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/5902//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5902//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5902//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5902//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5902//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5902//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5902//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5902//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5902//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/5902//console This message is automatically generated. Improve the assignment when node failures happen to choose the secondary RS as the new primary RS - Key: HBASE-8344 URL: https://issues.apache.org/jira/browse/HBASE-8344 Project: HBase Issue Type: Sub-task Reporter: Devaraj Das Assignee: Devaraj Das Priority: Critical Fix For: 0.95.2 Attachments: hbase-8344-1.txt, hbase-8344-2.1.txt, hbase-8344-2.2.txt, hbase-8344-2.3.txt, hbase-8344-2.4.txt -- This message is automatically generated by JIRA. 
[jira] [Created] (HBASE-8671) Per-region WAL breaks CP backwards compatibility for non-enabled case
Jesse Yates created HBASE-8671: -- Summary: Per-region WAL breaks CP backwards compatibility for non-enabled case Key: HBASE-8671 URL: https://issues.apache.org/jira/browse/HBASE-8671 Project: HBase Issue Type: Bug Reporter: Jesse Yates Assignee: Jesse Yates
Moving from a single WAL to the possibility of multiple WALs, the method signature in RegionServerServices became:
{code}
/** @return the HLog for a particular region. Pass null for getting the
 *  default (common) WAL */
public HLog getWAL(HRegionInfo regionInfo) throws IOException;
{code}
However, CPs that previously needed access to the WAL would just call:
{code}
RegionServerServices.getWAL();
{code}
Which is equivalent to calling:
{code}
RegionServerServices.getWAL(null);
{code}
but which requires a code change, recompilation, and possibly an additional compatibility layer for _different versions of 0.94_... not a great situation.
[jira] [Updated] (HBASE-8671) Per-region WAL breaks CP backwards compatibility in 0.94 for non-enabled case
[ https://issues.apache.org/jira/browse/HBASE-8671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesse Yates updated HBASE-8671: --- Summary: Per-region WAL breaks CP backwards compatibility in 0.94 for non-enabled case (was: Per-region WAL breaks 0.94 CP backwards compatibility for non-enabled case)
[jira] [Updated] (HBASE-8671) Per-region WAL breaks 0.94 CP backwards compatibility for non-enabled case
[ https://issues.apache.org/jira/browse/HBASE-8671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesse Yates updated HBASE-8671: --- Summary: Per-region WAL breaks 0.94 CP backwards compatibility for non-enabled case (was: Per-region WAL breaks CP backwards compatibility for non-enabled case)
[jira] [Created] (HBASE-8672) Create an Integration test for Bulk Loads
Elliott Clark created HBASE-8672: Summary: Create an Integration test for Bulk Loads Key: HBASE-8672 URL: https://issues.apache.org/jira/browse/HBASE-8672 Project: HBase Issue Type: Bug Reporter: Elliott Clark Assignee: Elliott Clark Bulk loads and MR are not well tested using our IT tests. We should add a test that bulk loads hfiles and then scans over the resulting table to make sure that all the data is there.
[jira] [Updated] (HBASE-8671) Per-region WAL breaks CP backwards compatibility in 0.94 for non-enabled case
[ https://issues.apache.org/jira/browse/HBASE-8671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesse Yates updated HBASE-8671: --- Attachment: hbase-8671-v0.patch A simple couple-line patch that just adds the signature back in, since it's still supported in HRegionServer and does the correct thing (passes null to getWAL(HRegionInfo)). Hoping to check this in early next week, if there aren't any objections.
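The shape of the compatibility shim the patch describes (restore the zero-argument getWAL() as a delegate to the region-aware overload) can be sketched as follows; the class and field names here are invented stand-ins, not the actual HRegionServer code:

```java
// Minimal sketch of the backwards-compatibility shim; names are illustrative.
public class WalCompatSketch {
    static class RegionInfo { }  // stand-in for HRegionInfo

    static class Wal {           // stand-in for HLog
        final String name;
        Wal(String name) { this.name = name; }
    }

    private final Wal commonWal = new Wal("default");

    /** New-style API: a null region yields the common (default) WAL. */
    public Wal getWAL(RegionInfo region) {
        return region == null ? commonWal : new Wal("per-region");
    }

    /** Restored old-style API: identical to getWAL(null), so coprocessors
     *  compiled against the pre-change signature keep working unchanged. */
    public Wal getWAL() {
        return getWAL(null);
    }
}
```

Restoring an overload that delegates to the new method keeps both source and binary compatibility without duplicating any logic.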
[jira] [Updated] (HBASE-8671) Per-region WAL breaks CP backwards compatibility in 0.94 for non-enabled case
[ https://issues.apache.org/jira/browse/HBASE-8671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesse Yates updated HBASE-8671: --- Affects Version/s: 0.94.9
[jira] [Updated] (HBASE-8671) Per-region WAL breaks CP backwards compatibility in 0.94 for non-enabled case
[ https://issues.apache.org/jira/browse/HBASE-8671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesse Yates updated HBASE-8671: --- Fix Version/s: 0.94.9
[jira] [Updated] (HBASE-8672) Create an Integration test for Bulk Loads
[ https://issues.apache.org/jira/browse/HBASE-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-8672: - Attachment: HBASE-8672-0.patch Here's a patch that should test bulk loads. It starts an MR job that creates linked chains. The format of rows is like this: rk - Long d: Chain Id - Random Data. l: Chain Id - Row Key of the next link in the chain. s: Chain Id - The step in the chain that this link is. All chains start on row 0. So we create these chains and then walk over them in an MR job. Create an Integration test for Bulk Loads - Key: HBASE-8672 URL: https://issues.apache.org/jira/browse/HBASE-8672 Project: HBase Issue Type: Bug Reporter: Elliott Clark Assignee: Elliott Clark Attachments: HBASE-8672-0.patch Bulk loads and MR are not well tested using our IT tests. We should add a test that bulk loads hfiles and then scans over the resulting table to make sure that all the data is there. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
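The chain layout above lends itself to a simple verification pass. Here is a minimal sketch (not the actual IntegrationTestBulkLoad code) of the walk step, assuming the 'l' column values have already been collected into a map of row key to next-link row key; a chain of n links must then be walkable in exactly n hops.

```java
// Hedged sketch of the chain-walk verification idea: each row's 'l' column
// holds the row key of the next link, so walking from the chain head until
// there is no next link should visit every link exactly once.
import java.util.HashMap;
import java.util.Map;

class ChainVerifier {
    /** next.get(row) models the 'l' column value for that row; absent = tail. */
    static int walk(Map<Long, Long> next, long start) {
        int steps = 0;
        Long row = start;
        while (row != null) {
            row = next.get(row); // follow the link
            steps++;
        }
        return steps;
    }
}
```

A real verification job would compare the hop count against the recorded 's' (step) column and flag any chain that is short, long, or cyclic.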
[jira] [Commented] (HBASE-8671) Per-region WAL breaks CP backwards compatibility in 0.94 for non-enabled case
[ https://issues.apache.org/jira/browse/HBASE-8671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671888#comment-13671888 ] Lars Hofhansl commented on HBASE-8671: -- Not enough to sink the current RC IMHO. Comments? This was introduced in 0.94.7 with HBASE-8081. Per-region WAL breaks CP backwards compatibility in 0.94 for non-enabled case - Key: HBASE-8671 URL: https://issues.apache.org/jira/browse/HBASE-8671 Project: HBase Issue Type: Bug Affects Versions: 0.94.9 Reporter: Jesse Yates Assignee: Jesse Yates Fix For: 0.94.9 Attachments: hbase-8671-v0.patch Moving from a single WAL to the possibility of multiple WALs, the method signature in RegionServerServices became: {code} /** @return the HLog for a particular region. Pass null for getting the * default (common) WAL */ public HLog getWAL(HRegionInfo regionInfo) throws IOException; {code} However, CPs that previously needed access to the WAL would just call: {code} RegionServerServices.getWAL(); {code} Which is equivalent to calling: {code} RegionServerServices.getWAL(null); {code} but which requires a code change, recompilation, and possibly an additional compatibility layer for _different versions of 0.94_... not a great situation. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8344) Improve the assignment when node failures happen to choose the secondary RS as the new primary RS
[ https://issues.apache.org/jira/browse/HBASE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671887#comment-13671887 ] Ted Yu commented on HBASE-8344: --- {code} +FavoredNodes.Position position = +FavoredNodes.getFavoredServerPosition(favoredNodes, s); +if (position.equals(Position.PRIMARY)) { + primaryHost = serverWithLegitStartCode; {code} If favoredNodes.size() != FavoredNodeAssignmentHelper.FAVORED_NODES_NUM, getFavoredServerPosition() would return null. The if check in the above snippet would become: {code} null.equals(Position.PRIMARY) {code} I think the following should be used: {code} +if (Position.PRIMARY.equals(position)) { {code} Improve the assignment when node failures happen to choose the secondary RS as the new primary RS - Key: HBASE-8344 URL: https://issues.apache.org/jira/browse/HBASE-8344 Project: HBase Issue Type: Sub-task Reporter: Devaraj Das Assignee: Devaraj Das Priority: Critical Fix For: 0.95.2 Attachments: hbase-8344-1.txt, hbase-8344-2.1.txt, hbase-8344-2.2.txt, hbase-8344-2.3.txt, hbase-8344-2.4.txt -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
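The ordering Ted suggests is the standard null-safe equals idiom. A minimal standalone illustration (with a stand-in Position enum, not the actual FavoredNodes code): calling equals() on a possibly-null result throws NullPointerException, while comparing against the constant is safe.

```java
// Why Position.PRIMARY.equals(position) is preferred: the constant is never
// null, so the check simply returns false when position is null instead of
// throwing a NullPointerException. Position here is a stand-in enum.
enum Position { PRIMARY, SECONDARY, TERTIARY }

class NullSafeEquals {
    static boolean isPrimary(Position position) {
        // Safe even when a lookup like getFavoredServerPosition() returned null.
        return Position.PRIMARY.equals(position);
    }
}
```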
[jira] [Commented] (HBASE-8671) Per-region WAL breaks CP backwards compatibility in 0.94 for non-enabled case
[ https://issues.apache.org/jira/browse/HBASE-8671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671889#comment-13671889 ] Jesse Yates commented on HBASE-8671: If no one noticed in the last release, then it's probably not a major problem. I'm happy to wait until 0.94.9, unless someone has a bug. Per-region WAL breaks CP backwards compatibility in 0.94 for non-enabled case - Key: HBASE-8671 URL: https://issues.apache.org/jira/browse/HBASE-8671 Project: HBase Issue Type: Bug Affects Versions: 0.94.9 Reporter: Jesse Yates Assignee: Jesse Yates Fix For: 0.94.9 Attachments: hbase-8671-v0.patch Moving from a single WAL to the possibility of multiple WALs, the method signature in RegionServerServices became: {code} /** @return the HLog for a particular region. Pass null for getting the * default (common) WAL */ public HLog getWAL(HRegionInfo regionInfo) throws IOException; {code} However, CPs that previously needed access to the WAL would just call: {code} RegionServerServices.getWAL(); {code} Which is equivalent to calling: {code} RegionServerServices.getWAL(null); {code} but which requires a code change, recompilation, and possibly an additional compatibility layer for _different versions of 0.94_... not a great situation. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8607) Run HBase server in an OSGi container
[ https://issues.apache.org/jira/browse/HBASE-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671896#comment-13671896 ] James Taylor commented on HBASE-8607: - My thinking is that an OSGi container would allow a new version of a coprocessor (and/or custom filter) jar to be loaded. Class conflicts between the old jar and the new jar would no longer be a problem - you'd never need to unload the old jar. Instead, future HBase operations that invoke the coprocessor would cause the newly loaded jar to be used instead of the older one. I'm not sure if this is possible or not. The whole idea would be to prevent a rolling restart or region close/reopen. Run HBase server in an OSGi container - Key: HBASE-8607 URL: https://issues.apache.org/jira/browse/HBASE-8607 Project: HBase Issue Type: New Feature Components: regionserver Reporter: James Taylor Run the HBase server in an OSGi container to support updating custom filters and coprocessor updates without requiring a region server reboot. Typically, applications that use coprocessors and custom filters also have shared classes underneath, so putting the burden on the user to include some kind of version name in the class is not adequate. Including the version name in the package might work in some cases (at least until dependent jars start to change as well), but is cumbersome and overburdens the app developer. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8671) Per-region WAL breaks CP backwards compatibility in 0.94 for non-enabled case
[ https://issues.apache.org/jira/browse/HBASE-8671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671895#comment-13671895 ] Ted Yu commented on HBASE-8671: --- +1 on patch. Per-region WAL breaks CP backwards compatibility in 0.94 for non-enabled case - Key: HBASE-8671 URL: https://issues.apache.org/jira/browse/HBASE-8671 Project: HBase Issue Type: Bug Affects Versions: 0.94.9 Reporter: Jesse Yates Assignee: Jesse Yates Fix For: 0.94.9 Attachments: hbase-8671-v0.patch Moving from a single WAL to the possibility of multiple WALs, the method signature in RegionServerServices became: {code} /** @return the HLog for a particular region. Pass null for getting the * default (common) WAL */ public HLog getWAL(HRegionInfo regionInfo) throws IOException; {code} However, CPs that previously needed access to the WAL would just call: {code} RegionServerServices.getWAL(); {code} Which is equivalent to calling: {code} RegionServerServices.getWAL(null); {code} but which requires a code change, recompilation, and possibly an additional compatibility layer for _different versions of 0.94_... not a great situation. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8666) META region isn't fully recovered during master initialization when META region recovery had chained failures
[ https://issues.apache.org/jira/browse/HBASE-8666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671899#comment-13671899 ] Hadoop QA commented on HBASE-8666: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12585674/hbase-8666-v2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/5903//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5903//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5903//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5903//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5903//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5903//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5903//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5903//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/5903//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/5903//console This message is automatically generated. META region isn't fully recovered during master initialization when META region recovery had chained failures - Key: HBASE-8666 URL: https://issues.apache.org/jira/browse/HBASE-8666 Project: HBase Issue Type: Bug Components: MTTR Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Fix For: 0.98.0, 0.95.2 Attachments: hbase-8666.patch, hbase-8666-v2.patch In distributedLogReplay mode when Meta recovery had experienced chained failures(recovery failed multiple times in a row), META region can't be fully recovered during master starts up. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8666) META region isn't fully recovered during master initialization when META region recovery had chained failures
[ https://issues.apache.org/jira/browse/HBASE-8666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671902#comment-13671902 ] Ted Yu commented on HBASE-8666: --- +1 META region isn't fully recovered during master initialization when META region recovery had chained failures - Key: HBASE-8666 URL: https://issues.apache.org/jira/browse/HBASE-8666 Project: HBase Issue Type: Bug Components: MTTR Reporter: Jeffrey Zhong Assignee: Jeffrey Zhong Fix For: 0.98.0, 0.95.2 Attachments: hbase-8666.patch, hbase-8666-v2.patch In distributedLogReplay mode when Meta recovery had experienced chained failures(recovery failed multiple times in a row), META region can't be fully recovered during master starts up. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8672) Create an Integration test for Bulk Loads
[ https://issues.apache.org/jira/browse/HBASE-8672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-8672: - Affects Version/s: 0.95.1 0.98.0 Status: Patch Available (was: Open) Create an Integration test for Bulk Loads - Key: HBASE-8672 URL: https://issues.apache.org/jira/browse/HBASE-8672 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.95.1 Reporter: Elliott Clark Assignee: Elliott Clark Attachments: HBASE-8672-0.patch Bulk loads and MR are not well tested using our IT tests. We should add a test that bulk loads hfiles and then scans over the resulting table to make sure that all the data is there. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services
[ https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671912#comment-13671912 ] James Taylor commented on HBASE-1936: - We're looking into leveraging this new feature to ease the installation of Phoenix (https://github.com/forcedotcom/phoenix). Currently we require that the phoenix jar be copied into the HBase lib dir of every region server, followed by a restart. For some background, Phoenix uses both coprocessors and custom filters. These are just the tip of the iceberg, so to speak. There's a ton of shared/foundational phoenix code being used by these coprocessors and filters - our type system, expression evaluation, schema interpretation, throttling code, memory management, etc. So when we say we'd like to upgrade our coprocessor and custom filters to a new version, that means all the foundational classes under it have changed as well. If we use this new feature, we're not sure we're easing the burden on our users, since users will still need to: 1) update the hbase-site.xml on each region server to set the hbase.dynamic.jar.dir path of the jar 2) copy the phoenix jar to hdfs 3) make a sym link to the new phoenix jar 4) get a rolling restart to be done on the cluster My fear would be that (1) would be error-prone, and for (2) and (3) the user wouldn't have the necessary perms. And (4), we'll probably just have to live with, but in a utopia, we could just have the new jar be used for new coprocessor/filter invocations. My question: how close can we come to automating all of this to the point where we could have a phoenix install script that looks like this: hbase install phoenix-1.2.jar Is HBASE-8400 a prerequisite? Any other missing pieces? We'd be happy to be a guinea pig/test case for how to solve this problem from a real application/platform standpoint. Thanks! 
ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services --- Key: HBASE-1936 URL: https://issues.apache.org/jira/browse/HBASE-1936 Project: HBase Issue Type: New Feature Reporter: stack Assignee: Jimmy Xiang Labels: noob Fix For: 0.98.0, 0.94.7, 0.95.1 Attachments: 0.94-1936.patch, cp_from_hdfs.patch, HBASE-1936-trunk(forReview).patch, trunk-1936.patch, trunk-1936_v2.1.patch, trunk-1936_v2.2.patch, trunk-1936_v2.patch, trunk-1936_v3.patch -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
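For reference, step (1) in the comment above amounts to an hbase-site.xml entry on each region server. A hedged example, assuming the property name shipped with HBASE-1936 is hbase.dynamic.jar.dir and using an illustrative HDFS path; verify both against your HBase version before relying on them:

```xml
<!-- Hedged example for step (1): property name and path are illustrative.
     Points the dynamic classloader at a shared HDFS directory of jars. -->
<property>
  <name>hbase.dynamic.jar.dir</name>
  <value>hdfs://namenode:8020/hbase/dynamic-jars</value>
</property>
```

Steps (2) and (3) then reduce to copying the phoenix jar into that directory, which is where the permissions concern raised above comes in.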
[jira] [Commented] (HBASE-8668) TestHLogSplit.generateHLog() does not use local variables for entries
[ https://issues.apache.org/jira/browse/HBASE-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671913#comment-13671913 ] Hudson commented on HBASE-8668: --- Integrated in HBase-TRUNK #4153 (See [https://builds.apache.org/job/HBase-TRUNK/4153/]) HBASE-8668 - TestHLogSplit.generateHLog() does not use local variables for entries (Ram) (Revision 1488313) Result = SUCCESS ramkrishna : Files : * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogSplit.java TestHLogSplit.generateHLog() does not use local variables for entries - Key: HBASE-8668 URL: https://issues.apache.org/jira/browse/HBASE-8668 Project: HBase Issue Type: Test Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Priority: Trivial Fix For: 0.98.0, 0.95.1 Attachments: HBASE-8668.patch {code} private HLog.Writer [] generateHLogs(final int writers, final int entries, final int leaveOpen) throws IOException { return generateHLogs((DistributedFileSystem)this.fs, writers, ENTRIES, leaveOpen); } {code} Here we should use local variable entries instead of ENTRIES. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
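The fix is a one-word change at the call site: forward the parameter instead of the class constant. A minimal sketch of this bug class, with hypothetical names rather than the real TestHLogSplit internals, showing why the shadowed constant silently overrides the caller's request:

```java
// Illustration of the HBASE-8668 bug class: a method takes a parameter
// 'entries' but forwards the constant ENTRIES, so callers asking for a
// different count silently get the default. Names here are hypothetical.
class HLogGeneratorSketch {
    static final int ENTRIES = 10;

    static int generateBuggy(int entries) {
        return generate(ENTRIES); // bug: ignores the parameter
    }

    static int generateFixed(int entries) {
        return generate(entries); // fix: use the local variable
    }

    private static int generate(int entries) {
        return entries; // stand-in for writing 'entries' WAL edits
    }
}
```

Bugs like this are easy to miss precisely because the default value makes most tests pass anyway.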
[jira] [Updated] (HBASE-8670) [0.94] Backport HBASE-8449 and HBASE-8204 to 0.94 (Refactor recoverLease retries and pauses)
[ https://issues.apache.org/jira/browse/HBASE-8670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-8670: - Summary: [0.94] Backport HBASE-8449 and HBASE-8204 to 0.94 (Refactor recoverLease retries and pauses) (was: [0.94] Backport HBASE-8449 to 0.94 (Refactor recoverLease retries and pauses)) [0.94] Backport HBASE-8449 and HBASE-8204 to 0.94 (Refactor recoverLease retries and pauses) Key: HBASE-8670 URL: https://issues.apache.org/jira/browse/HBASE-8670 Project: HBase Issue Type: Bug Components: Filesystem Integration, master, wal Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.94.9 Some history: Up until 0.94.8, HBase did not check the result of the recoverLease() call, but things kind of worked since we are checking for 0-length files in distributed log split tasks from region servers. If lease recovery is not finished, the log file will report 0 length, and the task will fail, and master will then re-call recoverLease() and reassign the task. This scheme might fail for log files that are larger than one HDFS block though. In 0.94.8, we committed (HBASE-8354, which is a backport of HBASE-7878) and later increased the sleep time to 4 secs in HBASE-8389. However, the proper solution arrived in trunk in HBASE-8449 which uses a backoff sleep policy + isFileClosed() api. We should backport this patch to 0.94 as well. isFileClosed() is released in Hadoop 1.2.0 (HDFS-4774) and 2.0.5 (HDFS-4525). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8670) [0.94] Backport HBASE-8449 and HBASE-8204 to 0.94 (Refactor recoverLease retries and pauses)
[ https://issues.apache.org/jira/browse/HBASE-8670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-8670: - Attachment: hbase-8670_v1.patch Attaching a patch which backports HBASE-8449 and HBASE-8204. Running the tests on 0.94 now. [0.94] Backport HBASE-8449 and HBASE-8204 to 0.94 (Refactor recoverLease retries and pauses) Key: HBASE-8670 URL: https://issues.apache.org/jira/browse/HBASE-8670 Project: HBase Issue Type: Bug Components: Filesystem Integration, master, wal Reporter: Enis Soztutar Assignee: Enis Soztutar Fix For: 0.94.9 Attachments: hbase-8670_v1.patch Some history: Up until 0.94.8, HBase did not check the result of the recoverLease() call, but things kind of worked since we are checking for 0-length files in distributed log split tasks from region servers. If lease recovery is not finished, the log file will report 0 length, and the task will fail, and master will then re-call recoverLease() and reassign the task. This scheme might fail for log files that are larger than one HDFS block though. In 0.94.8, we committed (HBASE-8354, which is a backport of HBASE-7878) and later increased the sleep time to 4 secs in HBASE-8389. However, the proper solution arrived in trunk in HBASE-8449 which uses a backoff sleep policy + isFileClosed() api. We should backport this patch to 0.94 as well. isFileClosed() is released in Hadoop 1.2.0 (HDFS-4774) and 2.0.5 (HDFS-4525). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
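The backoff-plus-isFileClosed() approach described above can be sketched as a retry loop. This is a hedged illustration, not the HBASE-8449 code: the LeaseApi interface and its method names are stand-ins for the HDFS calls, which live on DistributedFileSystem in the real API.

```java
// Hedged sketch of an HBASE-8449-style recovery loop: retry recoverLease()
// with capped exponential backoff, consulting an isFileClosed()-like probe
// where available. LeaseApi and its methods are illustrative stand-ins.
interface LeaseApi {
    boolean recoverLease();  // true once lease recovery has completed
    boolean isFileClosed();  // cheap probe (cf. HDFS-4774 / HDFS-4525)
}

class LeaseRecoverer {
    static boolean recover(LeaseApi fs, int maxAttempts, long firstPauseMs) {
        long pause = firstPauseMs;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (fs.recoverLease() || fs.isFileClosed()) {
                return true; // safe to read the log file now
            }
            try {
                Thread.sleep(pause);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return false;
            }
            pause = Math.min(pause * 2, 60_000L); // exponential backoff, capped
        }
        return false;
    }
}
```

The pre-8449 scheme was effectively this loop with a fixed sleep and no isFileClosed() probe, which is why it could stall on files longer than one block.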
[jira] [Commented] (HBASE-8642) [Snapshot] List and delete snapshot by table
[ https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671938#comment-13671938 ] Julian Zhou commented on HBASE-8642: Table rename is currently done via cloning a snapshot (so when we rename a table this way, do we correspondingly update all mapped snapshots?). What if we state that listing/deleting snapshots by table name is based on the table name currently available? For a shorter cmd, how about delete_table_snapshot/list_table_snapshot? [Snapshot] List and delete snapshot by table Key: HBASE-8642 URL: https://issues.apache.org/jira/browse/HBASE-8642 Project: HBase Issue Type: Improvement Components: snapshots Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Reporter: Julian Zhou Assignee: Julian Zhou Priority: Minor Fix For: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch Support list and delete snapshot by table name. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8642) [Snapshot] List and delete snapshot by table
[ https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671940#comment-13671940 ] Julian Zhou commented on HBASE-8642: Yeah, an RDBMS keeps object (database, tablespace, table, column, etc.) ids in its catalog, which makes external names changeable and transparent to the system. Is it possible to use a (shorter) id instead of the original name bytes in ROOT/META? Just a question; this would involve lots of changes to current HBase. [Snapshot] List and delete snapshot by table Key: HBASE-8642 URL: https://issues.apache.org/jira/browse/HBASE-8642 Project: HBase Issue Type: Improvement Components: snapshots Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Reporter: Julian Zhou Assignee: Julian Zhou Priority: Minor Fix For: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch Support list and delete snapshot by table name. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8642) [Snapshot] List and delete snapshot by table
[ https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671944#comment-13671944 ] Matteo Bertozzi commented on HBASE-8642: {quote}Table rename is via cloning snapshot currently (so we rename the table this way, pairwisely we update all mapped snapshots?)?{quote} No, we don't update the other snapshots' info... so they keep the old table name... {quote}How about the claiming is that listing/deleting snapshot by table name is based on current table name available?{quote} I don't really understand this one... at the moment if you list by table name, you also get the snapshots of the renamed table. {quote}Yeah, RDBMS has object (database, tablespace, table, column, etc.) id in catalog, which make external name changable and transparent to system. Is it possible to using id (shorter) instead of original name bytes in ROOT/META? Just a question, this involve lots of changes to current HBase.{quote} Take a look at the last slide of the HBASE-7806 pdf [Snapshot] List and delete snapshot by table Key: HBASE-8642 URL: https://issues.apache.org/jira/browse/HBASE-8642 Project: HBase Issue Type: Improvement Components: snapshots Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Reporter: Julian Zhou Assignee: Julian Zhou Priority: Minor Fix For: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch Support list and delete snapshot by table name. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8642) [Snapshot] List and delete snapshot by table
[ https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671979#comment-13671979 ] Ted Yu commented on HBASE-8642: --- bq. how about delete_table_snapshot/list_table_snapshot? I think delete_table_snapshots/list_table_snapshots are good. [Snapshot] List and delete snapshot by table Key: HBASE-8642 URL: https://issues.apache.org/jira/browse/HBASE-8642 Project: HBase Issue Type: Improvement Components: snapshots Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Reporter: Julian Zhou Assignee: Julian Zhou Priority: Minor Fix For: 0.98.0, 0.95.0, 0.95.1, 0.95.2 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch Support list and delete snapshot by table name. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8668) TestHLogSplit.generateHLog() does not use local variables for entries
[ https://issues.apache.org/jira/browse/HBASE-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13671980#comment-13671980 ] Hudson commented on HBASE-8668: --- Integrated in hbase-0.95 #223 (See [https://builds.apache.org/job/hbase-0.95/223/]) HBASE-8668 - TestHLogSplit.generateHLog() does not use local variables for entries (Ram) (Revision 1488314) Result = FAILURE ramkrishna : Files : * /hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogSplit.java TestHLogSplit.generateHLog() does not use local variables for entries - Key: HBASE-8668 URL: https://issues.apache.org/jira/browse/HBASE-8668 Project: HBase Issue Type: Test Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Priority: Trivial Fix For: 0.98.0, 0.95.1 Attachments: HBASE-8668.patch {code} private HLog.Writer [] generateHLogs(final int writers, final int entries, final int leaveOpen) throws IOException { return generateHLogs((DistributedFileSystem)this.fs, writers, ENTRIES, leaveOpen); } {code} Here we should use local variable entries instead of ENTRIES. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-8503) Backport hbase-8483 HConnectionManager can leak ZooKeeper connections when using deleteStaleConnection to 0.94
[ https://issues.apache.org/jira/browse/HBASE-8503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-8503: - Fix Version/s: (was: 0.94.8) Backport hbase-8483 HConnectionManager can leak ZooKeeper connections when using deleteStaleConnection to 0.94 Key: HBASE-8503 URL: https://issues.apache.org/jira/browse/HBASE-8503 Project: HBase Issue Type: Bug Reporter: stack Assignee: Lars Hofhansl See hbase-8483 for patch. Assigning LarsH for his consideration. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira