[jira] [Commented] (HBASE-7404) Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE
[ https://issues.apache.org/jira/browse/HBASE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13878386#comment-13878386 ] Vladimir Rodionov commented on HBASE-7404:
--
Yes, keys need to be off-heap as well to allow scaling well beyond 100G. Real-time eviction is better (but harder) for latency-sensitive applications. SSD-friendliness means sequential writes and *much* lower latency variation, as opposed to random writes with worse access consistency. Yes, the L1/L2 layout is what I am thinking about, [~stack].

Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE
--
Key: HBASE-7404
URL: https://issues.apache.org/jira/browse/HBASE-7404
Project: HBase
Issue Type: New Feature
Affects Versions: 0.94.3
Reporter: chunhui shen
Assignee: chunhui shen
Fix For: 0.95.0
Attachments: 7404-0.94-fixed-lines.txt, 7404-trunk-v10.patch, 7404-trunk-v11.patch, 7404-trunk-v12.patch, 7404-trunk-v13.patch, 7404-trunk-v13.txt, 7404-trunk-v14.patch, BucketCache.pdf, HBASE-7404-backport-0.94.patch, Introduction of Bucket Cache.pdf, hbase-7404-94v2.patch, hbase-7404-trunkv2.patch, hbase-7404-trunkv9.patch

First, thanks to @neil from Fusion-IO for sharing the source code.

Usage:
1. Use bucket cache as the main memory cache, configured as follows:
–hbase.bucketcache.ioengine heap (or offheap if using off-heap memory to cache blocks)
–hbase.bucketcache.size 0.4 (size of the bucket cache; 0.4 is a percentage of the max heap size)
2. Use bucket cache as a secondary cache, configured as follows:
–hbase.bucketcache.ioengine file:/disk1/hbase/cache.data (the file path where block data is stored)
–hbase.bucketcache.size 1024 (size of the bucket cache; the unit is MB, so 1024 means 1GB)
–hbase.bucketcache.combinedcache.enabled false (default value is true)
See more configurations in org.apache.hadoop.hbase.io.hfile.CacheConfig and org.apache.hadoop.hbase.io.hfile.bucket.BucketCache

What's Bucket Cache?
It can greatly reduce CMS pauses and heap fragmentation caused by GC, and it supports a large cache space for high read performance by using high-speed disks like Fusion-io.
1. An implementation of block cache, like LruBlockCache
2. Manages blocks' storage positions itself, through the Bucket Allocator
3. The cached blocks can be stored in memory or on the file system
4. Bucket Cache can be used as the main block cache (see CombinedBlockCache), combined with LruBlockCache, to reduce CMS pauses and fragmentation caused by GC
5. BucketCache can also be used as a secondary cache (e.g. using Fusion-io to store blocks) to enlarge the cache space

How about SlabCache? We studied and tested SlabCache first, but the results were poor, because:
1. SlabCache uses SingleSizeCache, whose memory utilization is low because of the many different block sizes, especially when using DataBlockEncoding
2. SlabCache is used in DoubleBlockCache: a block is cached both in SlabCache and LruBlockCache, and is put into LruBlockCache again on a SlabCache hit, so CMS and heap fragmentation don't get any better
3. Direct (off-heap) memory performance is not as good as heap, and it may cause OOM, so we recommend using the heap engine

See more in the attachment and in the patch
--
This message was sent by Atlassian JIRA (v6.1.5#6160)
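The secondary-cache setup described above could be expressed as an hbase-site.xml fragment roughly like the following. This is only a sketch built from the property names and example values quoted in this issue (the path /disk1/hbase/cache.data is the example value used above); verify the exact keys against CacheConfig for your HBase version.

```xml
<!-- Sketch of the secondary (file-backed) bucket cache configuration. -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <!-- file path where the cached block data is stored -->
  <value>file:/disk1/hbase/cache.data</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <!-- unit is MB when used as a secondary cache, so 1024 means 1GB -->
  <value>1024</value>
</property>
<property>
  <name>hbase.bucketcache.combinedcache.enabled</name>
  <!-- default is true; false makes BucketCache a secondary cache -->
  <value>false</value>
</property>
```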
[jira] [Created] (HBASE-10395) endTime won't be set in VerifyReplication if startTime is not set
cuijianwei created HBASE-10395:
--
Summary: endTime won't be set in VerifyReplication if startTime is not set
Key: HBASE-10395
URL: https://issues.apache.org/jira/browse/HBASE-10395
Project: HBase
Issue Type: Improvement
Components: mapreduce, Replication
Affects Versions: 0.94.16
Reporter: cuijianwei
Priority: Minor

In VerifyReplication, we may set startTime and endTime to restrict the data to verify. However, endTime won't be set in the program if we only pass endTime without startTime as a command line argument. The reason is the following code:
{code}
if (startTime != 0) {
  scan.setTimeRange(startTime, endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime);
}
{code}
This code ignores the endTime setting when startTime is not passed as a command line argument. Another place that needs improvement is the help message, as follows:
{code}
System.err.println(" stoprow      end of the row");
{code}
However, the program actually uses endrow to parse the arguments:
{code}
final String endTimeArgKey = "--endtime=";
if (cmd.startsWith(endTimeArgKey)) {
  endTime = Long.parseLong(cmd.substring(endTimeArgKey.length()));
  continue;
}
{code}
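The guard described above can be reproduced in isolation. The following is a minimal, self-contained Java sketch (the class name and the stand-in constant for HConstants.LATEST_TIMESTAMP are illustrative, not VerifyReplication's actual code) showing how an endTime passed without startTime is silently dropped, and what a fix in the spirit of the attached patch looks like:

```java
// Minimal reproduction of the time-range guard discussed above.
public class TimeRangeGuard {
    // Stand-in for HConstants.LATEST_TIMESTAMP.
    static final long LATEST_TIMESTAMP = Long.MAX_VALUE;

    // Mirrors the buggy logic: the range is applied only when startTime != 0,
    // so an endTime passed alone is ignored.
    static long[] effectiveRange(long startTime, long endTime) {
        if (startTime != 0) {
            return new long[] { startTime, endTime == 0 ? LATEST_TIMESTAMP : endTime };
        }
        // startTime was not passed: the scan is unrestricted, losing endTime.
        return new long[] { 0L, LATEST_TIMESTAMP };
    }

    // A fix in the spirit of the patch: apply the range when either bound is set.
    static long[] fixedRange(long startTime, long endTime) {
        if (startTime != 0 || endTime != 0) {
            return new long[] { startTime, endTime == 0 ? LATEST_TIMESTAMP : endTime };
        }
        return new long[] { 0L, LATEST_TIMESTAMP };
    }

    public static void main(String[] args) {
        // User passed only --endtime=1000: the buggy guard drops the upper bound.
        long[] buggy = effectiveRange(0L, 1000L);
        long[] fixed = fixedRange(0L, 1000L);
        System.out.println(buggy[0] + " " + buggy[1]);
        System.out.println(fixed[0] + " " + fixed[1]);
    }
}
```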
[jira] [Updated] (HBASE-10395) endTime won't be set in VerifyReplication if startTime is not set
[ https://issues.apache.org/jira/browse/HBASE-10395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] cuijianwei updated HBASE-10395:
---
Attachment: HBASE-10395-0.94-v1.patch
This patch will set endTime and use endTime as the argument name in the help message.
[jira] [Commented] (HBASE-10380) Add bytesBinary and filter options to CopyTable
[ https://issues.apache.org/jira/browse/HBASE-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13878402#comment-13878402 ] Ishan Chhabra commented on HBASE-10380:
---
Sure. I didn't know about ParseFilter. I tried to build my own textual language initially, but it became complicated quickly.

Add bytesBinary and filter options to CopyTable
---
Key: HBASE-10380
URL: https://issues.apache.org/jira/browse/HBASE-10380
Project: HBase
Issue Type: Improvement
Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
Priority: Minor
Attachments: HBASE_10380_0.94-v1.patch

Add options in CopyTable to:
1. Specify the start and stop row in bytesBinary format
2. Use filters
[jira] [Updated] (HBASE-10395) endTime won't be set in VerifyReplication if startTime is not set
[ https://issues.apache.org/jira/browse/HBASE-10395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liang Xie updated HBASE-10395:
--
Assignee: cuijianwei
Status: Patch Available (was: Open)
[jira] [Created] (HBASE-10396) The constructor of HBaseAdmin may close the shared HConnection
cuijianwei created HBASE-10396:
--
Summary: The constructor of HBaseAdmin may close the shared HConnection
Key: HBASE-10396
URL: https://issues.apache.org/jira/browse/HBASE-10396
Project: HBase
Issue Type: Bug
Components: Admin, Client
Affects Versions: 0.94.16
Reporter: cuijianwei

HBaseAdmin has the constructor:
{code}
public HBaseAdmin(Configuration c)
    throws MasterNotRunningException, ZooKeeperConnectionException {
  this.conf = HBaseConfiguration.create(c);
  this.connection = HConnectionManager.getConnection(this.conf);
  ...
{code}
As shown in the above code, HBaseAdmin will get a cached HConnection, or create a new HConnection, and use this HConnection to connect to the Master. Then, HBaseAdmin will delete the HConnection when connecting to the Master fails, as follows:
{code}
while (true) {
  try {
    this.connection.getMaster();
    return;
  } catch (MasterNotRunningException mnre) {
    HConnectionManager.deleteStaleConnection(this.connection);
    this.connection = HConnectionManager.getConnection(this.conf);
  }
{code}
The above code invokes HConnectionManager#deleteStaleConnection to delete the HConnection from the global HConnection cache. The risk is that the deleted HConnection might be shared by other threads, through HTable or HTablePool. Those threads sharing the deleted HConnection will then get a closed-HConnection exception:
{code}
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@61bc59aa closed
{code}
If users use HTablePool, the situation becomes worse, because closing an HTable only returns it to the HTablePool, which won't reduce the reference count of the closed HConnection. The closed HConnection will then keep being used until the HTablePool is cleared. In 0.94, some modules such as the REST server use HTablePool, and may therefore suffer from this problem.
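The hazard described above can be modeled without HBase at all. The following is a toy, self-contained Java sketch of a process-wide connection cache shared by several holders; all names are illustrative stand-ins, not HBase's actual API. It shows why evicting and closing a cached connection that other holders still reference (as deleteStaleConnection does) breaks those holders:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a shared, cached connection keyed by configuration.
public class SharedConnectionDemo {
    static class Connection {
        boolean closed = false;
        int refCount = 0;
        void use() {
            if (closed) throw new IllegalStateException(this + " closed");
        }
    }

    static final Map<String, Connection> CACHE = new HashMap<>();

    // Returns the cached connection for this config, creating it on first use.
    static Connection getConnection(String confKey) {
        Connection c = CACHE.computeIfAbsent(confKey, k -> new Connection());
        c.refCount++;
        return c;
    }

    // Models deleteStaleConnection: evicts and closes the cached connection
    // regardless of how many other holders still reference it.
    static void deleteStaleConnection(String confKey) {
        Connection c = CACHE.remove(confKey);
        if (c != null) c.closed = true;
    }

    public static void main(String[] args) {
        Connection tableConn = getConnection("conf"); // e.g. an HTable's connection
        Connection adminConn = getConnection("conf"); // the admin gets the same instance
        deleteStaleConnection("conf");                // admin's master lookup failed
        try {
            tableConn.use();                          // the innocent holder now breaks
        } catch (IllegalStateException e) {
            System.out.println("table thread got: " + e.getMessage());
        }
    }
}
```

A reference-counted close (only really closing when refCount drops to zero) avoids breaking other holders, which is why the comment below notes that HConnection#close alone does not evict the shared instance.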
[jira] [Commented] (HBASE-10395) endTime won't be set in VerifyReplication if startTime is not set
[ https://issues.apache.org/jira/browse/HBASE-10395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13878418#comment-13878418 ] Hadoop QA commented on HBASE-10395:
---
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12624298/HBASE-10395-0.94-v1.patch
against trunk revision .
ATTACHMENT ID: 12624298
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
{color:red}-1 patch{color}. The patch command could not apply the patch.
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8493//console
This message is automatically generated.
[jira] [Updated] (HBASE-10396) The constructor of HBaseAdmin may close the shared HConnection
[ https://issues.apache.org/jira/browse/HBASE-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] cuijianwei updated HBASE-10396:
---
Description: updated (the description is quoted in full above)
[jira] [Commented] (HBASE-10396) The constructor of HBaseAdmin may close the shared HConnection
[ https://issues.apache.org/jira/browse/HBASE-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13878423#comment-13878423 ] cuijianwei commented on HBASE-10396:
---
To alleviate this problem, we might invoke HConnection#close when connecting to the Master fails. However, this won't close the HConnection immediately if it is shared by other threads, and the same HConnection will then be returned on the next retry.
[jira] [Updated] (HBASE-10395) endTime won't be set in VerifyReplication if startTime is not set
[ https://issues.apache.org/jira/browse/HBASE-10395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] cuijianwei updated HBASE-10395:
---
Attachment: HBASE-10395-trunk-v1.patch
attach patch for trunk
[jira] [Commented] (HBASE-10396) The constructor of HBaseAdmin may close the shared HConnection
[ https://issues.apache.org/jira/browse/HBASE-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13878436#comment-13878436 ] chunhui shen commented on HBASE-10396:
--
Trunk seems to have fixed this problem. Make a backport?
[jira] [Commented] (HBASE-10394) Test for Replication with tags
[ https://issues.apache.org/jira/browse/HBASE-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13878443#comment-13878443 ] Hadoop QA commented on HBASE-10394:
---
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12624291/HBASE-10394.patch
against trunk revision .
ATTACHMENT ID: 12624291
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new or modified tests.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop 1.1 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:red}-1 findbugs{color}. The patch appears to introduce 2 new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
{color:red}-1 site{color}. The patch appears to cause the mvn site goal to fail.
{color:green}+1 core tests{color}. The patch passed unit tests in .
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8491//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8491//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8491//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8491//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8491//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8491//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8491//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8491//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8491//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8491//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8491//console
This message is automatically generated.

Test for Replication with tags
--
Key: HBASE-10394
URL: https://issues.apache.org/jira/browse/HBASE-10394
Project: HBase
Issue Type: Test
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Attachments: HBASE-10394.patch

Followup task for HBASE-10322: add a test asserting that replication works well and replicates cells with tags when tags are being used.
[jira] [Updated] (HBASE-10395) endTime won't be set in VerifyReplication if startTime is not set
[ https://issues.apache.org/jira/browse/HBASE-10395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] cuijianwei updated HBASE-10395:
---
Attachment: HBASE-10395-trunk-v2.patch
HBASE-10395-0.94-v2.patch
set endTime in replication Scan
[jira] [Updated] (HBASE-10395) endTime won't be set in VerifyReplication if startTime is not set
[ https://issues.apache.org/jira/browse/HBASE-10395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] cuijianwei updated HBASE-10395:
---
Attachment: HBASE-10395-trunk-v3.patch
HBASE-10395-0.94-v3.patch
polish
[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads
[ https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878469#comment-13878469 ] Hudson commented on HBASE-10322: SUCCESS: Integrated in HBase-TRUNK #4847 (See [https://builds.apache.org/job/HBase-TRUNK/4847/]) HBASE-10322 Strip tags from KV while sending back to client on reads. (anoopsamjohn: rev 1560265) * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionKey.java * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Put.java * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java * /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodecV2.java * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodecWithTags.java * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodec.java * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodecWithTags.java * /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodecV2.java * /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodecWithTags.java * /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestKeyValueCodecWithTags.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java * 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestEncodedSeekers.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/PerformanceEvaluation.java Strip tags from KV while sending back to client on reads Key: HBASE-10322 URL: https://issues.apache.org/jira/browse/HBASE-10322 Project: HBase Issue Type: Bug Affects Versions: 0.98.0 Reporter: Anoop Sam John Assignee: Anoop Sam John Priority: Blocker Fix For: 0.98.0, 0.99.0 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, HBASE-10322_V3.patch, HBASE-10322_V4.patch, HBASE-10322_V5.patch, HBASE-10322_V6.patch, HBASE-10322_codec.patch Right now we have some inconsistency wrt sending back tags on reads. We do this in scans when using the Java client (codec-based cell block encoding), but during a Get operation, or when a pure PB-based Scan comes in, we are not sending back the tags. So we have to do one of the fixes below: 1. Send back tags in the missing cases as well. But sending back the visibility expression / cell ACL is not correct. 2. Don't send back tags in any case. This will be a problem when a tool like ExportTool uses the scan to export the table data; we will miss exporting the cell visibility/ACL. 3. Send back tags based on some condition. It has to be on a per-scan basis. The simplest way is to pass some kind of attribute in the Scan which says whether to send back tags or not. But trusting whatever the Scan specifies might not be correct IMO. Then comes the way of checking the user who is doing the scan: only send back tags when an HBase super user is doing the scan.
So in a case like the Export Tool's, the execution should happen as a super user. So IMO we should go with #3. Patch coming soon. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10396) The constructor of HBaseAdmin may close the shared HConnection
[ https://issues.apache.org/jira/browse/HBASE-10396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878481#comment-13878481 ] cuijianwei commented on HBASE-10396: Thanks for your comment [~zjushch]. I went through the code of HBaseAdmin in trunk; the HConnection will be closed in HBaseAdmin#close, which fixes the problem. The code of HBaseAdmin changed a lot between 0.94 and trunk; will we supply a patch for 0.94 to fix this problem? The constructor of HBaseAdmin may close the shared HConnection --- Key: HBASE-10396 URL: https://issues.apache.org/jira/browse/HBASE-10396 Project: HBase Issue Type: Bug Components: Admin, Client Affects Versions: 0.94.16 Reporter: cuijianwei HBaseAdmin has the constructor: {code} public HBaseAdmin(Configuration c) throws MasterNotRunningException, ZooKeeperConnectionException { this.conf = HBaseConfiguration.create(c); this.connection = HConnectionManager.getConnection(this.conf); ... {code} As shown in the above code, HBaseAdmin will get a cached HConnection or create a new one, and use this HConnection to connect to the Master. Then, HBaseAdmin will delete the HConnection when connecting to the Master fails, as follows: {code} while ( true ){ try { this.connection.getMaster(); return; } catch (MasterNotRunningException mnre) { HConnectionManager.deleteStaleConnection(this.connection); this.connection = HConnectionManager.getConnection(this.conf); } {code} The above code invokes HConnectionManager#deleteStaleConnection to delete the HConnection from the global HConnection cache. The risk is that the deleted HConnection might be shared by other threads, such as HTable or HTablePool.
Then, the threads sharing the deleted HConnection will get a closed-HConnection exception: {code} org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@61bc59aa closed {code} If users use HTablePool, the situation becomes worse, because closing an HTable only returns it to the HTablePool, which won't decrease the reference count of the closed HConnection. Then, the closed HConnection will keep being used until the HTablePool is cleared. In 0.94, some modules such as the REST server use HTablePool and may therefore suffer from this problem. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
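The reference-sharing hazard can be seen in a toy model of the connection cache (all names below are hypothetical stand-ins for HConnectionManager's cache, not the real API):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: one client deletes a cached connection that another still holds.
public class SharedConnectionDemo {
    static class Conn {
        boolean closed;
        void close() { closed = true; }
    }

    // Mimics the global HConnection cache keyed by configuration.
    static final Map<String, Conn> CACHE = new HashMap<>();

    static Conn getConnection(String confKey) {
        return CACHE.computeIfAbsent(confKey, k -> new Conn());
    }

    // Mimics deleteStaleConnection: evicts from the cache and closes,
    // regardless of how many other holders still reference the connection.
    static void deleteStaleConnection(String confKey) {
        Conn c = CACHE.remove(confKey);
        if (c != null) {
            c.close();
        }
    }
}
```

Two callers of getConnection receive the same object, so one caller's deleteStaleConnection closes the connection under the other's feet, producing exactly the "closed" exception quoted above.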
[jira] [Commented] (HBASE-7404) Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE
[ https://issues.apache.org/jira/browse/HBASE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878498#comment-13878498 ] Liang Xie commented on HBASE-7404: -- bq. is your patch the same as the one posted here by Dave Latham? If not, mind adding refreshed patch? Our ported stuff was against our internal 0.94.3 branch, but I guess there should be no difference from Dave's, since most of HBASE-7404's changes are new files :) bq. If you had some performance numbers that would be great too. We decided to port it because the biggest latency contributor in several of our clusters is GC. After porting this jira, and with lots of VM tuning, the total GC cost each day decreased from [2000,3000]s to [300,500]s, and the top contributor to 99th-percentile latency isn't GC any more :) I think the ported stuff contributed at least about a [200,400]ms reduction; I have forgotten the exact numbers, it was several months ago, you know :) I agree that if we don't have GC trouble, then there is no benefit here, unless we want to run on fast flash; that's not my scenario. Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE -- Key: HBASE-7404 URL: https://issues.apache.org/jira/browse/HBASE-7404 Project: HBase Issue Type: New Feature Affects Versions: 0.94.3 Reporter: chunhui shen Assignee: chunhui shen Fix For: 0.95.0 Attachments: 7404-0.94-fixed-lines.txt, 7404-trunk-v10.patch, 7404-trunk-v11.patch, 7404-trunk-v12.patch, 7404-trunk-v13.patch, 7404-trunk-v13.txt, 7404-trunk-v14.patch, BucketCache.pdf, HBASE-7404-backport-0.94.patch, Introduction of Bucket Cache.pdf, hbase-7404-94v2.patch, hbase-7404-trunkv2.patch, hbase-7404-trunkv9.patch First, thanks to @neil from Fusion-IO for sharing the source code.
Usage: 1. Use bucket cache as the main memory cache, configured as follows: –hbase.bucketcache.ioengine heap (or offheap if using off-heap memory to cache blocks) –hbase.bucketcache.size 0.4 (size for bucket cache; 0.4 is a percentage of max heap size) 2. Use bucket cache as a secondary cache, configured as follows: –hbase.bucketcache.ioengine file:/disk1/hbase/cache.data (the file path where to store the block data) –hbase.bucketcache.size 1024 (size for bucket cache; the unit is MB, so 1024 means 1GB) –hbase.bucketcache.combinedcache.enabled false (default value being true) See more configurations in org.apache.hadoop.hbase.io.hfile.CacheConfig and org.apache.hadoop.hbase.io.hfile.bucket.BucketCache. What's Bucket Cache? It can greatly decrease CMS pauses and heap fragmentation caused by GC, and it supports a large cache space for high read performance by using high-speed disks like Fusion-io. 1. An implementation of block cache like LruBlockCache. 2. Self-manages blocks' storage positions through the Bucket Allocator. 3. The cached blocks can be stored in memory or on the file system. 4. BucketCache can be used as the main block cache (see CombinedBlockCache), combined with LruBlockCache, to decrease CMS pauses and fragmentation caused by GC. 5. BucketCache can also be used as a secondary cache (e.g. using Fusion-io to store blocks) to enlarge the cache space. How about SlabCache? We studied and tested SlabCache first, but the results were bad, because: 1. SlabCache uses SingleSizeCache, whose memory utilization is low because of the variety of block sizes, especially when using DataBlockEncoding. 2. SlabCache is used in DoubleBlockCache: a block is cached both in SlabCache and LruBlockCache, and is put into LruBlockCache again if hit in SlabCache, so CMS and heap fragmentation don't get any better. 3. Direct (off-heap) memory performance is not as good as heap, and it may cause OOM, so we recommend the heap engine. See more in the attachment and in the patch. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
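For reference, the secondary-cache settings listed above would look roughly like this in hbase-site.xml (values copied from the description; the path /disk1/hbase/cache.data is just the example value from this issue):

```xml
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>file:/disk1/hbase/cache.data</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>1024</value> <!-- unit is MB, so 1 GB -->
</property>
<property>
  <name>hbase.bucketcache.combinedcache.enabled</name>
  <value>false</value> <!-- use BucketCache as a secondary cache -->
</property>
```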
[jira] [Commented] (HBASE-10277) refactor AsyncProcess
[ https://issues.apache.org/jira/browse/HBASE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878494#comment-13878494 ] Nicolas Liochon commented on HBASE-10277: - For #3, flushCommit may not be called before close time. The write buffer manages the size of the messages for the user; it allows them to stream their writes to the HTable. The issue here is really the error management; the feature itself is nice when everything works properly. I would propose an option #4: add a callback for error management. If the callback is set, we use it. If not, we raise an exception as we used to do. We could stream the gets/increments like the puts as well, and use a callback to return the result. This would save the creation of the Object[], and it would make the interface consistent. The code itself is already there imho. We can consider that changing the HTable semantics is for another jira btw, as you like. refactor AsyncProcess - Key: HBASE-10277 URL: https://issues.apache.org/jira/browse/HBASE-10277 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HBASE-10277.01.patch, HBASE-10277.patch AsyncProcess currently has two patterns of usage: one from HTable flush, without callback and with reuse, and one from the HCM/HTable batch call, with callback and without reuse. In the former case (but not the latter), it also does some throttling of actions on the initial submit call, limiting the number of outstanding actions per server. The latter case is relatively straightforward.
The former appears to be error-prone due to reuse: if, as the javadoc claims should be safe, multiple submit calls are performed without waiting for the async part of the previous call to finish, fields like hasError become ambiguous and can be used for the wrong call; the callback for success/failure is called based on the original index of an action in the submitted list, but with only one callback supplied to AP in the ctor, it's not clear to which submit call the index belongs if several are outstanding. I was going to add support for HBASE-10070 to AP, and found that it might be difficult to do cleanly. It would be nice to normalize AP usage patterns; in particular, to separate the global part (load tracking) from the per-submit-call part. The per-submit part can more conveniently track stuff like initialActions and the mapping of indexes and retry information that is currently passed around the method calls. -I am not sure yet, but maybe sending of the original index to the server in ClientProtos.MultiAction can also be avoided.- It cannot be avoided, because the API to the server doesn't have a one-to-one correspondence between requests and responses in an individual call to multi (retries/rearrangement have nothing to do with it). -- This message was sent by Atlassian JIRA (v6.1.5#6160)
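Option #4 above could be sketched roughly as follows (every name below is hypothetical, not the actual AsyncProcess code): if an error callback was registered, failures are streamed to it; otherwise the old throwing behaviour is kept.

```java
// Hypothetical sketch of per-failure callbacks replacing buffered errors.
public class ErrorPolicySketch<R> {
    public interface ErrorCallback<R> {
        void onFailure(R row, Throwable cause);
    }

    private final ErrorCallback<R> callback; // null means legacy behaviour

    public ErrorPolicySketch(ErrorCallback<R> callback) {
        this.callback = callback;
    }

    // Returns true if the failure was consumed by the callback;
    // otherwise falls back to raising an exception, as today.
    public boolean handleFailure(R row, Throwable cause) {
        if (callback != null) {
            callback.onFailure(row, cause);
            return true;
        }
        throw new RuntimeException("write failed for " + row, cause);
    }
}
```

The design point is backward compatibility: callers that never set a callback see exactly the old exception-raising semantics.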
[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads
[ https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878506#comment-13878506 ] Hudson commented on HBASE-10322: SUCCESS: Integrated in HBase-0.98 #101 (See [https://builds.apache.org/job/HBase-0.98/101/]) HBASE-10322 Strip tags from KV while sending back to client on reads. (anoopsamjohn: rev 1560266) * /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionKey.java * /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java * /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Put.java * /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java * /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodecV2.java * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodecWithTags.java * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodec.java * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodecWithTags.java * /hbase/branches/0.98/hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodecV2.java * /hbase/branches/0.98/hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodecWithTags.java * /hbase/branches/0.98/hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestKeyValueCodecWithTags.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java * 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestEncodedSeekers.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/PerformanceEvaluation.java Strip tags from KV while sending back to client on reads Key: HBASE-10322 URL: https://issues.apache.org/jira/browse/HBASE-10322 Project: HBase Issue Type: Bug Affects Versions: 0.98.0 Reporter: Anoop Sam John Assignee: Anoop Sam John Priority: Blocker Fix For: 0.98.0, 0.99.0 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, HBASE-10322_V3.patch, HBASE-10322_V4.patch, HBASE-10322_V5.patch, HBASE-10322_V6.patch, HBASE-10322_codec.patch -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10395) endTime won't be set in VerifyReplication if startTime is not set
[ https://issues.apache.org/jira/browse/HBASE-10395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878527#comment-13878527 ] Hadoop QA commented on HBASE-10395: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12624315/HBASE-10395-trunk-v3.patch against trunk revision . ATTACHMENT ID: 12624315 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop 1.1 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 2 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): at org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface.testHBase3583(TestRegionObserverInterface.java:244) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8494//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8494//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8494//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8494//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8494//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8494//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8494//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8494//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8494//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8494//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8494//console This message is automatically generated. 
endTime won't be set in VerifyReplication if startTime is not set - Key: HBASE-10395 URL: https://issues.apache.org/jira/browse/HBASE-10395 Project: HBase Issue Type: Improvement Components: mapreduce, Replication Affects Versions: 0.94.16 Reporter: cuijianwei Assignee: cuijianwei Priority: Minor Attachments: HBASE-10395-0.94-v1.patch, HBASE-10395-0.94-v2.patch, HBASE-10395-0.94-v3.patch, HBASE-10395-trunk-v1.patch, HBASE-10395-trunk-v2.patch, HBASE-10395-trunk-v3.patch
[jira] [Commented] (HBASE-10395) endTime won't be set in VerifyReplication if startTime is not set
[ https://issues.apache.org/jira/browse/HBASE-10395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878525#comment-13878525 ] Hadoop QA commented on HBASE-10395: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12624315/HBASE-10395-trunk-v3.patch against trunk revision . ATTACHMENT ID: 12624315 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop 1.1 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 2 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8495//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8495//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8495//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8495//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8495//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8495//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8495//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8495//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8495//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8495//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8495//console This message is automatically generated. 
endTime won't be set in VerifyReplication if startTime is not set - Key: HBASE-10395 URL: https://issues.apache.org/jira/browse/HBASE-10395 Project: HBase Issue Type: Improvement Components: mapreduce, Replication Affects Versions: 0.94.16 Reporter: cuijianwei Assignee: cuijianwei Priority: Minor Attachments: HBASE-10395-0.94-v1.patch, HBASE-10395-0.94-v2.patch, HBASE-10395-0.94-v3.patch, HBASE-10395-trunk-v1.patch, HBASE-10395-trunk-v2.patch, HBASE-10395-trunk-v3.patch -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10397) Fix findbugs introduced from HBASE-9426
[ https://issues.apache.org/jira/browse/HBASE-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-10397: --- Description: Method org.apache.hadoop.hbase.client.HBaseAdmin.execProcedure(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator Method org.apache.hadoop.hbase.client.HBaseAdmin.isProcedureFinished(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator Fix findbugs introduced from HBASE-9426 --- Key: HBASE-10397 URL: https://issues.apache.org/jira/browse/HBASE-10397 Project: HBase Issue Type: Bug Reporter: Anoop Sam John Assignee: Anoop Sam John Priority: Minor Method org.apache.hadoop.hbase.client.HBaseAdmin.execProcedure(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator Method org.apache.hadoop.hbase.client.HBaseAdmin.isProcedureFinished(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator -- This message was sent by Atlassian JIRA (v6.1.5#6160)
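The warning quoted above is Findbugs' WMI_WRONG_MAP_ITERATOR pattern: iterating keySet() and calling get(key) performs a second lookup per entry, while entrySet() yields key and value together. A small illustration (the class and method names are made up for the demo, not from HBaseAdmin):

```java
import java.util.Map;

public class EntrySetDemo {
    // The pattern Findbugs flags: map.get(key) is a redundant second lookup.
    public static String joinViaKeySet(Map<String, String> props) {
        StringBuilder sb = new StringBuilder();
        for (String key : props.keySet()) {
            sb.append(key).append('=').append(props.get(key)).append(';');
        }
        return sb.toString();
    }

    // The fix: iterate entrySet() so each entry is visited exactly once.
    public static String joinViaEntrySet(Map<String, String> props) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : props.entrySet()) {
            sb.append(e.getKey()).append('=').append(e.getValue()).append(';');
        }
        return sb.toString();
    }
}
```

Both methods produce the same output; only the number of map lookups differs.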
[jira] [Created] (HBASE-10397) Fix findbugs introduced from HBASE-9426
Anoop Sam John created HBASE-10397: -- Summary: Fix findbugs introduced from HBASE-9426 Key: HBASE-10397 URL: https://issues.apache.org/jira/browse/HBASE-10397 Project: HBase Issue Type: Bug Reporter: Anoop Sam John Assignee: Anoop Sam John Priority: Minor -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10397) Fix findbugs introduced from HBASE-9426
[ https://issues.apache.org/jira/browse/HBASE-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-10397: --- Attachment: HBASE-10397.patch Fix findbugs introduced from HBASE-9426 --- Key: HBASE-10397 URL: https://issues.apache.org/jira/browse/HBASE-10397 Project: HBase Issue Type: Bug Affects Versions: 0.99.0 Reporter: Anoop Sam John Assignee: Anoop Sam John Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10397.patch Method org.apache.hadoop.hbase.client.HBaseAdmin.execProcedure(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator Method org.apache.hadoop.hbase.client.HBaseAdmin.isProcedureFinished(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10397) Fix findbugs introduced from HBASE-9426
[ https://issues.apache.org/jira/browse/HBASE-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-10397: --- Fix Version/s: 0.99.0 Affects Version/s: 0.99.0 Status: Patch Available (was: Open) Fix findbugs introduced from HBASE-9426 --- Key: HBASE-10397 URL: https://issues.apache.org/jira/browse/HBASE-10397 Project: HBase Issue Type: Bug Affects Versions: 0.99.0 Reporter: Anoop Sam John Assignee: Anoop Sam John Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10397.patch Method org.apache.hadoop.hbase.client.HBaseAdmin.execProcedure(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator Method org.apache.hadoop.hbase.client.HBaseAdmin.isProcedureFinished(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-9740) A corrupt HFile could cause endless attempts to assign the region without a chance of success
[ https://issues.apache.org/jira/browse/HBASE-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878557#comment-13878557 ] Aditya Kishore commented on HBASE-9740: --- @[~Tom_play] Thanks for the patch. Two minor observations. 1. Please use 2 spaces instead of TAB for indentation. 2. The Submit Patch feature in the JIRA only works with the trunk (not a concern with this patch itself). [~ram_krish], [~jxiang] Mind taking a look? A corrupt HFile could cause endless attempts to assign the region without a chance of success - Key: HBASE-9740 URL: https://issues.apache.org/jira/browse/HBASE-9740 Project: HBase Issue Type: Bug Affects Versions: 0.94.16 Reporter: Aditya Kishore Assignee: Aditya Kishore Attachments: patch-9740_0.94.txt As described in HBASE-9737, a corrupt HFile in a region could lead to an assignment storm in the cluster, since the Master will keep trying to assign the region to each region server one after another and obviously none will succeed. The region server, upon detecting such a scenario, should mark the region as RS_ZK_REGION_FAILED_ERROR (or something to that effect) in ZooKeeper, which should signal the Master to stop assigning the region until the error has been resolved (via an HBase shell command, probably assign?) -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads
[ https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878559#comment-13878559 ] Hudson commented on HBASE-10322: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #95 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/95/]) HBASE-10322 Strip tags from KV while sending back to client on reads. (anoopsamjohn: rev 1560266) * /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionKey.java * /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java * /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Put.java * /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java * /hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodecV2.java * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodecWithTags.java * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodec.java * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodecWithTags.java * /hbase/branches/0.98/hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodecV2.java * /hbase/branches/0.98/hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodecWithTags.java * /hbase/branches/0.98/hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestKeyValueCodecWithTags.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java * 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestEncodedSeekers.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java * /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/PerformanceEvaluation.java Strip tags from KV while sending back to client on reads Key: HBASE-10322 URL: https://issues.apache.org/jira/browse/HBASE-10322 Project: HBase Issue Type: Bug Affects Versions: 0.98.0 Reporter: Anoop Sam John Assignee: Anoop Sam John Priority: Blocker Fix For: 0.98.0, 0.99.0 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, HBASE-10322_V3.patch, HBASE-10322_V4.patch, HBASE-10322_V5.patch, HBASE-10322_V6.patch, HBASE-10322_codec.patch Right now we have some inconsistency wrt sending back tags on read. We do this in scan when using Java client(Codec based cell block encoding). But during a Get operation or when a pure PB based Scan comes we are not sending back the tags. So any of the below fix we have to do 1. Send back tags in missing cases also. But sending back visibility expression/ cell ACL is not correct. 2. Don't send back tags in any case. This will a problem when a tool like ExportTool use the scan to export the table data. We will miss exporting the cell visibility/ACL. 3. Send back tags based on some condition. It has to be per scan basis. Simplest way is pass some kind of attribute in Scan which says whether to send back tags or not. 
But trusting whatever the scan specifies might not be correct IMO. Then there is the option of checking the user who is doing the scan: send back tags only when an HBase super user is doing the scan. So for a case like the Export Tool's, the execution should happen as a super user. So IMO we should go with #3. Patch coming soon. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Resolved] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code
[ https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Liochon resolved HBASE-10375. - Resolution: Fixed Fix Version/s: 0.99.0 0.96.2 0.98.0 Hadoop Flags: Reviewed hbase-default.xml hbase.status.multicast.address.port does not match code - Key: HBASE-10375 URL: https://issues.apache.org/jira/browse/HBASE-10375 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.99.0, 0.96.1.1 Reporter: Jonathan Hsieh Assignee: Nicolas Liochon Fix For: 0.98.0, 0.96.2, 0.99.0 Attachments: 10375.v1.98-96.patch, 10375.v1.patch, 10375.v2.96-98.patch, 10375.v2.trunk.patch In hbase-default.xml
{code}
+ <property>
+   <name>hbase.status.multicast.address.port</name>
+   <value>6100</value>
+   <description>
+     Multicast port to use for the status publication by multicast.
+   </description>
+ </property>
{code}
In HConstants it was 60100.
{code}
public static final String STATUS_MULTICAST_PORT = "hbase.status.multicast.port";
public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
{code}
(it was 60100 in the code for 0.96 and 0.98.) I lean towards going with the code as opposed to the config file. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
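The mismatch above matters because the code-side default only applies when the key is absent from the loaded configuration; if hbase-default.xml ships a different key name (or value), the two silently diverge. A minimal stand-in sketch of that lookup, using a plain java.util.Map in place of Hadoop's Configuration (the class and method names here are illustrative, only the key strings and default come from the snippets above):

```java
import java.util.HashMap;
import java.util.Map;

public class MulticastPortLookup {
    // Key and default copied from the HConstants snippet above
    static final String STATUS_MULTICAST_PORT = "hbase.status.multicast.port";
    static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;

    // The code default is used only when the exact key is absent from the config
    static int resolvePort(Map<String, String> conf) {
        String v = conf.get(STATUS_MULTICAST_PORT);
        return v == null ? DEFAULT_STATUS_MULTICAST_PORT : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // The hbase-default.xml above uses a *different* key name, so its 6100
        // never reaches this lookup and the code default wins instead
        conf.put("hbase.status.multicast.address.port", "6100");
        System.out.println(resolvePort(conf));
    }
}
```

This is why the resolution keeps the value in code and fixes the XML: a stale key name in the config file is invisible at read time.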
[jira] [Commented] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code
[ https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878567#comment-13878567 ] Nicolas Liochon commented on HBASE-10375: - Committed to .99, .98, .96. Thanks, lads. hbase-default.xml hbase.status.multicast.address.port does not match code - Key: HBASE-10375 URL: https://issues.apache.org/jira/browse/HBASE-10375 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.99.0, 0.96.1.1 Reporter: Jonathan Hsieh Assignee: Nicolas Liochon Fix For: 0.98.0, 0.96.2, 0.99.0 Attachments: 10375.v1.98-96.patch, 10375.v1.patch, 10375.v2.96-98.patch, 10375.v2.trunk.patch In hbase-default.xml
{code}
+ <property>
+   <name>hbase.status.multicast.address.port</name>
+   <value>6100</value>
+   <description>
+     Multicast port to use for the status publication by multicast.
+   </description>
+ </property>
{code}
In HConstants it was 60100.
{code}
public static final String STATUS_MULTICAST_PORT = "hbase.status.multicast.port";
public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
{code}
(it was 60100 in the code for 0.96 and 0.98.) I lean towards going with the code as opposed to the config file. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10397) Fix findbugs introduced from HBASE-9426
[ https://issues.apache.org/jira/browse/HBASE-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878607#comment-13878607 ] Hadoop QA commented on HBASE-10397: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12624325/HBASE-10397.patch against trunk revision . ATTACHMENT ID: 12624325 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop 1.1 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8496//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8496//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8496//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8496//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8496//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8496//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8496//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8496//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8496//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8496//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8496//console This message is automatically generated. 
Fix findbugs introduced from HBASE-9426 --- Key: HBASE-10397 URL: https://issues.apache.org/jira/browse/HBASE-10397 Project: HBase Issue Type: Bug Affects Versions: 0.99.0 Reporter: Anoop Sam John Assignee: Anoop Sam John Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10397.patch Method org.apache.hadoop.hbase.client.HBaseAdmin.execProcedure(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator Method org.apache.hadoop.hbase.client.HBaseAdmin.isProcedureFinished(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator -- This message was sent by Atlassian JIRA (v6.1.5#6160)
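The findbugs pattern named above is generic Java rather than anything HBase-specific: iterating `keySet()` and calling `get(key)` inside the loop performs a second map lookup per element that `entrySet()` avoids. A self-contained illustration of both forms (a hypothetical summing method, not the actual HBaseAdmin code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EntrySetDemo {
    // Flagged pattern: one extra hash lookup per key
    static int sumViaKeySet(Map<String, Integer> m) {
        int sum = 0;
        for (String key : m.keySet()) {
            sum += m.get(key); // the redundant lookup findbugs complains about
        }
        return sum;
    }

    // Preferred pattern: entrySet yields key and value in a single pass
    static int sumViaEntrySet(Map<String, Integer> m) {
        int sum = 0;
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new LinkedHashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        System.out.println(sumViaKeySet(m) == sumViaEntrySet(m));
    }
}
```

Both methods return the same result; the entrySet version simply does half the lookups, which is all the warning is about.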
[jira] [Commented] (HBASE-10397) Fix findbugs introduced from HBASE-9426
[ https://issues.apache.org/jira/browse/HBASE-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878617#comment-13878617 ] Ted Yu commented on HBASE-10397: +1 Fix findbugs introduced from HBASE-9426 --- Key: HBASE-10397 URL: https://issues.apache.org/jira/browse/HBASE-10397 Project: HBase Issue Type: Bug Affects Versions: 0.99.0 Reporter: Anoop Sam John Assignee: Anoop Sam John Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10397.patch Method org.apache.hadoop.hbase.client.HBaseAdmin.execProcedure(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator Method org.apache.hadoop.hbase.client.HBaseAdmin.isProcedureFinished(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code
[ https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878651#comment-13878651 ] Hudson commented on HBASE-10375: SUCCESS: Integrated in HBase-TRUNK #4848 (See [https://builds.apache.org/job/HBase-TRUNK/4848/]) HBASE-10375 hbase-default.xml hbase.status.multicast.address.port does not match code (nkeywal: rev 1560319) * /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/trunk/hbase-common/src/main/resources/hbase-default.xml hbase-default.xml hbase.status.multicast.address.port does not match code - Key: HBASE-10375 URL: https://issues.apache.org/jira/browse/HBASE-10375 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.99.0, 0.96.1.1 Reporter: Jonathan Hsieh Assignee: Nicolas Liochon Fix For: 0.98.0, 0.96.2, 0.99.0 Attachments: 10375.v1.98-96.patch, 10375.v1.patch, 10375.v2.96-98.patch, 10375.v2.trunk.patch In hbase-default.xml
{code}
+ <property>
+   <name>hbase.status.multicast.address.port</name>
+   <value>6100</value>
+   <description>
+     Multicast port to use for the status publication by multicast.
+   </description>
+ </property>
{code}
In HConstants it was 60100.
{code}
public static final String STATUS_MULTICAST_PORT = "hbase.status.multicast.port";
public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
{code}
(it was 60100 in the code for 0.96 and 0.98.) I lean towards going with the code as opposed to the config file. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code
[ https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878735#comment-13878735 ] Hudson commented on HBASE-10375: FAILURE: Integrated in hbase-0.96 #267 (See [https://builds.apache.org/job/hbase-0.96/267/]) HBASE-10375 hbase-default.xml hbase.status.multicast.address.port does not match code (nkeywal: rev 1560322) * /hbase/branches/0.96/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/branches/0.96/hbase-common/src/main/resources/hbase-default.xml hbase-default.xml hbase.status.multicast.address.port does not match code - Key: HBASE-10375 URL: https://issues.apache.org/jira/browse/HBASE-10375 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.99.0, 0.96.1.1 Reporter: Jonathan Hsieh Assignee: Nicolas Liochon Fix For: 0.98.0, 0.96.2, 0.99.0 Attachments: 10375.v1.98-96.patch, 10375.v1.patch, 10375.v2.96-98.patch, 10375.v2.trunk.patch In hbase-default.xml
{code}
+ <property>
+   <name>hbase.status.multicast.address.port</name>
+   <value>6100</value>
+   <description>
+     Multicast port to use for the status publication by multicast.
+   </description>
+ </property>
{code}
In HConstants it was 60100.
{code}
public static final String STATUS_MULTICAST_PORT = "hbase.status.multicast.port";
public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
{code}
(it was 60100 in the code for 0.96 and 0.98.) I lean towards going with the code as opposed to the config file. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code
[ https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878748#comment-13878748 ] Hudson commented on HBASE-10375: SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #96 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/96/]) HBASE-10375 hbase-default.xml hbase.status.multicast.address.port does not match code (nkeywal: rev 1560320) * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/branches/0.98/hbase-common/src/main/resources/hbase-default.xml hbase-default.xml hbase.status.multicast.address.port does not match code - Key: HBASE-10375 URL: https://issues.apache.org/jira/browse/HBASE-10375 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.99.0, 0.96.1.1 Reporter: Jonathan Hsieh Assignee: Nicolas Liochon Fix For: 0.98.0, 0.96.2, 0.99.0 Attachments: 10375.v1.98-96.patch, 10375.v1.patch, 10375.v2.96-98.patch, 10375.v2.trunk.patch In hbase-default.xml
{code}
+ <property>
+   <name>hbase.status.multicast.address.port</name>
+   <value>6100</value>
+   <description>
+     Multicast port to use for the status publication by multicast.
+   </description>
+ </property>
{code}
In HConstants it was 60100.
{code}
public static final String STATUS_MULTICAST_PORT = "hbase.status.multicast.port";
public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
{code}
(it was 60100 in the code for 0.96 and 0.98.) I lean towards going with the code as opposed to the config file. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-9740) A corrupt HFile could cause endless attempts to assign the region without a chance of success
[ https://issues.apache.org/jira/browse/HBASE-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878779#comment-13878779 ] Jimmy Xiang commented on HBASE-9740: In trunk, we move the region to FAILED_OPEN state in such a case. If it's offline, it will be hard to notice this issue, right? A corrupt HFile could cause endless attempts to assign the region without a chance of success - Key: HBASE-9740 URL: https://issues.apache.org/jira/browse/HBASE-9740 Project: HBase Issue Type: Bug Affects Versions: 0.94.16 Reporter: Aditya Kishore Assignee: Aditya Kishore Attachments: patch-9740_0.94.txt As described in HBASE-9737, a corrupt HFile in a region could lead to an assignment storm in the cluster, since the Master will keep trying to assign the region to each region server one after another and obviously none will succeed. The region server, upon detecting such a scenario, should mark the region as RS_ZK_REGION_FAILED_ERROR (or something to that effect) in the Zookeeper, which should indicate to the Master to stop assigning the region until the error has been resolved (via an HBase shell command, probably assign?) -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Created] (HBASE-10398) HBase book updates for Replication after HBASE-10322
Anoop Sam John created HBASE-10398: -- Summary: HBase book updates for Replication after HBASE-10322 Key: HBASE-10398 URL: https://issues.apache.org/jira/browse/HBASE-10398 Project: HBase Issue Type: Task Components: documentation Affects Versions: 0.98.0 Reporter: Anoop Sam John Assignee: Anoop Sam John -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878833#comment-13878833 ] Ted Yu commented on HBASE-10336: Starting HBase 0.98 on hadoop built from the hadoop-2 branch gave us:
{code}
2014-01-21 06:07:17,871 FATAL [master:h2-centos6-uns-1390276854-hbase-10:6] master.HMaster: Unhandled exception. Starting shutdown.
java.lang.IllegalArgumentException: Property value must not be null
  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
  at org.apache.hadoop.conf.Configuration.set(Configuration.java:958)
  at org.apache.hadoop.conf.Configuration.set(Configuration.java:940)
  at org.apache.hadoop.http.HttpServer.initializeWebServer(HttpServer.java:510)
  at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:470)
  at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:458)
  at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:412)
  at org.apache.hadoop.hbase.util.InfoServer.<init>(InfoServer.java:59)
  at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:584)
{code}
Looks like this JIRA would get us past the above issue. [~apurtell]: Can you consider this for 0.98.0? Remove deprecated usage of Hadoop HttpServer in InfoServer -- Key: HBASE-10336 URL: https://issues.apache.org/jira/browse/HBASE-10336 Project: HBase Issue Type: Bug Affects Versions: 0.99.0 Reporter: Eric Charles Assignee: Eric Charles Attachments: HBASE-10336-1.patch, HBASE-10336-2.patch, HBASE-10336-3.patch, HBASE-10336-4.patch, HBASE-10336-5.patch Recent changes in Hadoop HttpServer give an NPE when running on hadoop 3.0.0-SNAPSHOT. The way we use HttpServer is deprecated and will probably not be fixed (see HDFS-5760). We'd better move to the new proposed builder pattern, which means we can no longer use inheritance to build our nice InfoServer. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
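The move away from inheritance that the HBASE-10336 description calls for can be sketched generically: instead of InfoServer extending HttpServer, it holds an instance produced by a builder. Everything below is a simplified stand-in written for illustration, not Hadoop's actual HttpServer/Builder API:

```java
public class InfoServerSketch {
    // Simplified stand-in for the server being built
    static final class HttpServer {
        final String name;
        final int port;
        HttpServer(String name, int port) { this.name = name; this.port = port; }
    }

    // Builder pattern: configuration accumulates via chained setters, build() constructs
    static final class Builder {
        private String name = "info";
        private int port = 0;
        Builder setName(String n) { this.name = n; return this; }
        Builder setPort(int p) { this.port = p; return this; }
        HttpServer build() { return new HttpServer(name, port); }
    }

    // InfoServer wraps (composes) the built server instead of subclassing it
    static final class InfoServer {
        private final HttpServer server;
        InfoServer(String name, int port) {
            this.server = new Builder().setName(name).setPort(port).build();
        }
        int getPort() { return server.port; }
    }

    public static void main(String[] args) {
        System.out.println(new InfoServerSketch.InfoServer("master", 16010).getPort());
    }
}
```

Composition over inheritance here means InfoServer no longer breaks when the upstream class changes its constructors, which is exactly the failure mode the stack trace above shows.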
[jira] [Commented] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878827#comment-13878827 ] Nick Dimiduk commented on HBASE-10392: -- Ah, you're right. I missed the occurrence in MemStoreFlusher. Currently it respects both the old and new parameters. Maybe we could continue respecting new or old but print a warning if the old is used? I'll update the patch to do that and we'll see what folks think. Correct references to hbase.regionserver.global.memstore.upperLimit --- Key: HBASE-10392 URL: https://issues.apache.org/jira/browse/HBASE-10392 Project: HBase Issue Type: Bug Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.99.0 Attachments: HBASE-10392.0.patch As part of the awesome new HBASE-5349, a couple of references to {{hbase.regionserver.global.memstore.upperLimit}} were missed. Clean those up to use the new {{hbase.regionserver.global.memstore.size}} instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878837#comment-13878837 ] Anoop Sam John commented on HBASE-10392: In fact this check is duplicated in two places now. But HeapMemoryManager will get initialized only when auto tuning is ON. I'm thinking about whether we can remove the check in HeapMemoryManager. Correct references to hbase.regionserver.global.memstore.upperLimit --- Key: HBASE-10392 URL: https://issues.apache.org/jira/browse/HBASE-10392 Project: HBase Issue Type: Bug Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.99.0 Attachments: HBASE-10392.0.patch As part of the awesome new HBASE-5349, a couple of references to {{hbase.regionserver.global.memstore.upperLimit}} were missed. Clean those up to use the new {{hbase.regionserver.global.memstore.size}} instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10397) Fix findbugs introduced from HBASE-9426
[ https://issues.apache.org/jira/browse/HBASE-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878846#comment-13878846 ] Jonathan Hsieh commented on HBASE-10397: +1. thanks! Fix findbugs introduced from HBASE-9426 --- Key: HBASE-10397 URL: https://issues.apache.org/jira/browse/HBASE-10397 Project: HBase Issue Type: Bug Affects Versions: 0.99.0 Reporter: Anoop Sam John Assignee: Anoop Sam John Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10397.patch Method org.apache.hadoop.hbase.client.HBaseAdmin.execProcedure(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator Method org.apache.hadoop.hbase.client.HBaseAdmin.isProcedureFinished(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10397) Fix findbugs introduced from HBASE-9426
[ https://issues.apache.org/jira/browse/HBASE-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anoop Sam John updated HBASE-10397: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to Trunk. Thanks Ted Jon. Fix findbugs introduced from HBASE-9426 --- Key: HBASE-10397 URL: https://issues.apache.org/jira/browse/HBASE-10397 Project: HBase Issue Type: Bug Affects Versions: 0.99.0 Reporter: Anoop Sam John Assignee: Anoop Sam John Priority: Minor Fix For: 0.99.0 Attachments: HBASE-10397.patch Method org.apache.hadoop.hbase.client.HBaseAdmin.execProcedure(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator Method org.apache.hadoop.hbase.client.HBaseAdmin.isProcedureFinished(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10395) endTime won't be set in VerifyReplication if startTime is not set
[ https://issues.apache.org/jira/browse/HBASE-10395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878862#comment-13878862 ] Ted Yu commented on HBASE-10395: +1 [~apurtell]: Do you want this in 0.98? endTime won't be set in VerifyReplication if startTime is not set - Key: HBASE-10395 URL: https://issues.apache.org/jira/browse/HBASE-10395 Project: HBase Issue Type: Improvement Components: mapreduce, Replication Affects Versions: 0.94.16 Reporter: cuijianwei Assignee: cuijianwei Priority: Minor Attachments: HBASE-10395-0.94-v1.patch, HBASE-10395-0.94-v2.patch, HBASE-10395-0.94-v3.patch, HBASE-10395-trunk-v1.patch, HBASE-10395-trunk-v2.patch, HBASE-10395-trunk-v3.patch In VerifyReplication, we may set startTime and endTime to restrict the data to verify. However, the endTime won't be set in the program if we only pass endTime without startTime in the command line arguments. The reason is the following code:
{code}
if (startTime != 0) {
  scan.setTimeRange(startTime, endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime);
}
{code}
The code will ignore the endTime setting when startTime is not passed as a command line argument. Another place that needs improvement is the help message:
{code}
System.err.println(" stoprow  end of the row");
{code}
However, the program actually uses endrow to parse the arguments:
{code}
final String endTimeArgKey = "--endtime=";
if (cmd.startsWith(endTimeArgKey)) {
  endTime = Long.parseLong(cmd.substring(endTimeArgKey.length()));
  continue;
}
{code}
-- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10395) endTime won't be set in VerifyReplication if startTime is not set
[ https://issues.apache.org/jira/browse/HBASE-10395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878864#comment-13878864 ] Andrew Purtell commented on HBASE-10395: +1 endTime won't be set in VerifyReplication if startTime is not set - Key: HBASE-10395 URL: https://issues.apache.org/jira/browse/HBASE-10395 Project: HBase Issue Type: Improvement Components: mapreduce, Replication Affects Versions: 0.94.16 Reporter: cuijianwei Assignee: cuijianwei Priority: Minor Attachments: HBASE-10395-0.94-v1.patch, HBASE-10395-0.94-v2.patch, HBASE-10395-0.94-v3.patch, HBASE-10395-trunk-v1.patch, HBASE-10395-trunk-v2.patch, HBASE-10395-trunk-v3.patch In VerifyReplication, we may set startTime and endTime to restrict the data to verify. However, the endTime won't be set in the program if we only pass endTime without startTime in the command line arguments. The reason is the following code:
{code}
if (startTime != 0) {
  scan.setTimeRange(startTime, endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime);
}
{code}
The code will ignore the endTime setting when startTime is not passed as a command line argument. Another place that needs improvement is the help message:
{code}
System.err.println(" stoprow  end of the row");
{code}
However, the program actually uses endrow to parse the arguments:
{code}
final String endTimeArgKey = "--endtime=";
if (cmd.startsWith(endTimeArgKey)) {
  endTime = Long.parseLong(cmd.substring(endTimeArgKey.length()));
  continue;
}
{code}
-- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10395) endTime won't be set in VerifyReplication if startTime is not set
[ https://issues.apache.org/jira/browse/HBASE-10395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-10395: --- Fix Version/s: 0.99.0 0.98.0 Hadoop Flags: Reviewed Integrated to 0.98 and trunk. Thanks for the patch, Jianwei. Thanks for the review, Andy. endTime won't be set in VerifyReplication if startTime is not set - Key: HBASE-10395 URL: https://issues.apache.org/jira/browse/HBASE-10395 Project: HBase Issue Type: Improvement Components: mapreduce, Replication Affects Versions: 0.94.16 Reporter: cuijianwei Assignee: cuijianwei Priority: Minor Fix For: 0.98.0, 0.99.0 Attachments: HBASE-10395-0.94-v1.patch, HBASE-10395-0.94-v2.patch, HBASE-10395-0.94-v3.patch, HBASE-10395-trunk-v1.patch, HBASE-10395-trunk-v2.patch, HBASE-10395-trunk-v3.patch In VerifyReplication, we may set startTime and endTime to restrict the data to verify. However, the endTime won't be set in the program if we only pass endTime without startTime in the command line arguments. The reason is the following code:
{code}
if (startTime != 0) {
  scan.setTimeRange(startTime, endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime);
}
{code}
The code will ignore the endTime setting when startTime is not passed as a command line argument. Another place that needs improvement is the help message:
{code}
System.err.println(" stoprow  end of the row");
{code}
However, the program actually uses endrow to parse the arguments:
{code}
final String endTimeArgKey = "--endtime=";
if (cmd.startsWith(endTimeArgKey)) {
  endTime = Long.parseLong(cmd.substring(endTimeArgKey.length()));
  continue;
}
{code}
-- This message was sent by Atlassian JIRA (v6.1.5#6160)
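The VerifyReplication fix described above amounts to a one-condition change: set the time range when either bound was supplied, not only when startTime was. A plain-Java sketch of the before/after logic, using a long pair in place of a real Scan (LATEST_TIMESTAMP is Long.MAX_VALUE, matching HBase's HConstants, but this class and its method names are illustrative only):

```java
public class TimeRangeLogic {
    static final long LATEST_TIMESTAMP = Long.MAX_VALUE; // as in HConstants

    // Original logic: endTime is silently dropped when startTime == 0
    static long[] rangeBefore(long startTime, long endTime) {
        if (startTime != 0) {
            return new long[] { startTime, endTime == 0 ? LATEST_TIMESTAMP : endTime };
        }
        return new long[] { 0L, LATEST_TIMESTAMP }; // default full range
    }

    // Fixed logic: honor endTime even when startTime was not passed
    static long[] rangeAfter(long startTime, long endTime) {
        if (startTime != 0 || endTime != 0) {
            return new long[] { startTime, endTime == 0 ? LATEST_TIMESTAMP : endTime };
        }
        return new long[] { 0L, LATEST_TIMESTAMP };
    }

    public static void main(String[] args) {
        // --endtime=5000 alone: the old code ignores it, the fixed code respects it
        System.out.println(rangeBefore(0, 5000)[1] == LATEST_TIMESTAMP);
        System.out.println(rangeAfter(0, 5000)[1] == 5000);
    }
}
```

With only an end bound supplied, the old condition falls through and the scan covers all timestamps; the fixed condition caps the scan at endTime as the user asked.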
[jira] [Created] (HBASE-10399) Add documentation for VerifyReplication to refguide
Ted Yu created HBASE-10399: -- Summary: Add documentation for VerifyReplication to refguide Key: HBASE-10399 URL: https://issues.apache.org/jira/browse/HBASE-10399 Project: HBase Issue Type: Improvement Reporter: Ted Yu Priority: Minor HBase refguide currently doesn't document how VerifyReplication is used for comparing local table with remote table. Document for VerifyReplication should be added so that users know how to use it. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878890#comment-13878890 ] Nick Dimiduk commented on HBASE-10392: -- It might be better if HeapMemoryManager would concede the check to HBaseConfiguration. Neither class considers the presence of BucketCache running in heap mode, which I intend to address in a different ticket. Correct references to hbase.regionserver.global.memstore.upperLimit --- Key: HBASE-10392 URL: https://issues.apache.org/jira/browse/HBASE-10392 Project: HBase Issue Type: Bug Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.99.0 Attachments: HBASE-10392.0.patch As part of the awesome new HBASE-5349, a couple of references to {{hbase.regionserver.global.memstore.upperLimit}} were missed. Clean those up to use the new {{hbase.regionserver.global.memstore.size}} instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-10392: - Status: Open (was: Patch Available) Correct references to hbase.regionserver.global.memstore.upperLimit --- Key: HBASE-10392 URL: https://issues.apache.org/jira/browse/HBASE-10392 Project: HBase Issue Type: Bug Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.99.0 Attachments: HBASE-10392.0.patch, HBASE-10392.1.patch As part of the awesome new HBASE-5349, a couple references to {{hbase.regionserver.global.memstore.upperLimit}} was missed. Clean those up to use the new {{hbase.regionserver.global.memstore.size}} instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-10392: - Attachment: HBASE-10392.1.patch Defer to o.a.h.conf.Configuration#addDeprecation where appropriate. Fix additional book references and fix site build. Correct references to hbase.regionserver.global.memstore.upperLimit --- Key: HBASE-10392 URL: https://issues.apache.org/jira/browse/HBASE-10392 Project: HBase Issue Type: Bug Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.99.0 Attachments: HBASE-10392.0.patch, HBASE-10392.1.patch As part of the awesome new HBASE-5349, a couple references to {{hbase.regionserver.global.memstore.upperLimit}} was missed. Clean those up to use the new {{hbase.regionserver.global.memstore.size}} instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
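Hadoop's Configuration#addDeprecation, which the patch note above defers to, maps an old key onto a new one so that reads of the new key transparently see values set under the deprecated name (logging a warning). A stand-in sketch of that fallback using a plain Map; the two key strings are the real ones from this issue, while the class and helper method are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class DeprecatedKeySketch {
    static final String OLD_KEY = "hbase.regionserver.global.memstore.upperLimit";
    static final String NEW_KEY = "hbase.regionserver.global.memstore.size";

    // New key wins; the deprecated key is honored as a fallback, with a warning
    static float globalMemstoreSize(Map<String, String> conf, float defaultVal) {
        String v = conf.get(NEW_KEY);
        if (v == null && (v = conf.get(OLD_KEY)) != null) {
            System.err.println("WARN: " + OLD_KEY + " is deprecated; use " + NEW_KEY);
        }
        return v == null ? defaultVal : Float.parseFloat(v);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(OLD_KEY, "0.35");
        // Only the deprecated key is set, so the fallback path is taken
        System.out.println(globalMemstoreSize(conf, 0.4f));
    }
}
```

Centralizing the mapping this way is what lets callers like MemStoreFlusher read only the new key instead of each duplicating the old-or-new check.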
[jira] [Updated] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-10392: - Status: Patch Available (was: Open) Correct references to hbase.regionserver.global.memstore.upperLimit --- Key: HBASE-10392 URL: https://issues.apache.org/jira/browse/HBASE-10392 Project: HBase Issue Type: Bug Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.99.0 Attachments: HBASE-10392.0.patch, HBASE-10392.1.patch As part of the awesome new HBASE-5349, a couple references to {{hbase.regionserver.global.memstore.upperLimit}} was missed. Clean those up to use the new {{hbase.regionserver.global.memstore.size}} instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost
[ https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vandana Ayyalasomayajula updated HBASE-10338: - Attachment: HBASE-10338_addendum.patch This patch should fix the NPE as the region server coprocessor host is not initialized in the constructor. Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost Key: HBASE-10338 URL: https://issues.apache.org/jira/browse/HBASE-10338 Project: HBase Issue Type: Bug Components: Coprocessors, regionserver Affects Versions: 0.98.0 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Fix For: 0.98.0, 0.96.2, 0.99.0 Attachments: 10338.1-0.96.patch, 10338.1-0.98.patch, 10338.1.patch, 10338.1.patch, HBASE-10338.0.patch, HBASE-10338_addendum.patch Runtime exception is being thrown when AccessController CP is used with region server. This is happening as region server co processor host is created before zookeeper is initialized in region server. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878916#comment-13878916 ] Hadoop QA commented on HBASE-10392: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12624378/HBASE-10392.1.patch against trunk revision . ATTACHMENT ID: 12624378 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:red}-1 hadoop1.0{color}. The patch failed to compile against the hadoop 1.0 profile. Here is snippet of errors: {code}[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) on project hbase-common: Compilation failure: Compilation failure: [ERROR] /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java:[51,15] sun.misc.Unsafe is Sun proprietary API and may be removed in a future release [ERROR] /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java:[1110,19] sun.misc.Unsafe is Sun proprietary API and may be removed in a future release [ERROR] /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java:[1116,21] sun.misc.Unsafe is Sun proprietary API and may be removed in a future release [ERROR] /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java:[1121,28] sun.misc.Unsafe is Sun proprietary API and may be removed in a future release [ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java:[103,8] cannot find symbol -- org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) on project hbase-common: Compilation failure at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59) -- Caused by: org.apache.maven.plugin.CompilationFailureException: Compilation failure at org.apache.maven.plugin.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:729) at org.apache.maven.plugin.CompilerMojo.execute(CompilerMojo.java:128) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209) ... 19 more{code} Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8497//console This message is automatically generated. Correct references to hbase.regionserver.global.memstore.upperLimit --- Key: HBASE-10392 URL: https://issues.apache.org/jira/browse/HBASE-10392 Project: HBase Issue Type: Bug Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.99.0 Attachments: HBASE-10392.0.patch, HBASE-10392.1.patch As part of the awesome new HBASE-5349, a couple references to {{hbase.regionserver.global.memstore.upperLimit}} were missed. Clean those up to use the new {{hbase.regionserver.global.memstore.size}} instead. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
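The hadoop-1.0 profile failures above come from compiling with warnings treated as errors against {{sun.misc.Unsafe}}, which HBase's Bytes class uses for fast byte-array comparisons. A common way to sidestep a hard compile-time dependency on the proprietary API is to probe for it reflectively and keep a pure-Java fallback. The sketch below is illustrative only -- the class and method names are mine, not HBase's actual Bytes.java:

```java
// Illustrative sketch (not HBase's actual Bytes.java): probing for sun.misc.Unsafe
// reflectively so there is no compile-time reference to the proprietary API, with a
// portable pure-Java comparison as the fallback. Class and method names are mine.
import java.lang.reflect.Field;

public final class LexCompare {
    // True when sun.misc.Unsafe can be obtained; a real implementation would
    // switch to an Unsafe-based fast path in that case.
    static final boolean UNSAFE_AVAILABLE = probeUnsafe();

    private static boolean probeUnsafe() {
        try {
            Class<?> clazz = Class.forName("sun.misc.Unsafe");
            Field f = clazz.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return f.get(null) != null;
        } catch (Throwable t) { // class missing, field renamed, access denied...
            return false;
        }
    }

    // Portable fallback: unsigned lexicographic byte[] comparison.
    public static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int x = a[i] & 0xff, y = b[i] & 0xff;
            if (x != y) return x - y;
        }
        return a.length - b.length;
    }
}
```

Because the probe catches {{Throwable}}, the class still loads and falls back cleanly on JVMs where Unsafe is absent or inaccessible.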
[jira] [Commented] (HBASE-7320) Replace calls to KeyValue.getBuffer with appropropriate calls to getRowArray, getFamilyArray(), getQualifierArray, and getValueArray
[ https://issues.apache.org/jira/browse/HBASE-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878927#comment-13878927 ] Nick Dimiduk commented on HBASE-7320: - [~lhofhansl] if I follow your intentions, this means: # KeyValue#getBuffer goes away entirely -- there's no API assumption that a KeyValue is backed by a single buffer object of any type (byte[], ByteBuffer, etc.). A KeyValue instance /could/ be backed by a single buffer object, at the option of its creator, but this is an implementation detail. # KeyValue objects by API design are now backed by 5 buffer objects -- one each for rowkey, cf, qualifier, ts, and value. # The previous point does not restrict a producer of KeyValue instances from using its own encoding of multiple instances, but it does require that producer to generate instances that conform to this API. For example, say I wanted to store KeyValues in batches of 100 where all rowkeys are stored together, then all cf, then quals, then ts, then values, and make optimizations therein. The requirement is that I can produce a KeyValue instance from the block that implements the getXXXArray methods AND I no longer must materialize a buffer object in support of getBuffer. Did I get that right? Do we have any thought on what an appropriate buffer object should be? Is that for another ticket? Replace calls to KeyValue.getBuffer with appropropriate calls to getRowArray, getFamilyArray(), getQualifierArray, and getValueArray Key: HBASE-7320 URL: https://issues.apache.org/jira/browse/HBASE-7320 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Assignee: stack Fix For: 0.98.0 In many places this is a simple task of just replacing the method name. There are, however, quite a few places where we assume that: # the entire KV is backed by a single byte array # the KV's key portion is backed by a single byte array Some of those can easily be fixed; others will need their own jiras.
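The shape Nick describes -- a cell whose components are addressed via separate array/offset/length accessors, so a batch producer can pack many cells' rowkeys and values into shared backing arrays -- might look roughly like this. This is an illustrative sketch only; the class name and constructor are hypothetical, not the actual HBase KeyValue/Cell API:

```java
// Illustrative sketch only: hypothetical class, not org.apache.hadoop.hbase.KeyValue.
// A "cell" addressed by getXXXArray/getXXXOffset/getXXXLength accessors, where the
// backing arrays may be shared by a whole batch of cells.
public class BatchBackedCell {
    private final byte[] rows;    // rowkeys for the whole batch, packed together
    private final byte[] values;  // values for the whole batch, packed together
    private final int rowOff, rowLen, valOff, valLen;
    private final long timestamp;

    public BatchBackedCell(byte[] rows, int rowOff, int rowLen,
                           byte[] values, int valOff, int valLen, long timestamp) {
        this.rows = rows; this.rowOff = rowOff; this.rowLen = rowLen;
        this.values = values; this.valOff = valOff; this.valLen = valLen;
        this.timestamp = timestamp;
    }

    // Callers slice with (array, offset, length); no single contiguous buffer
    // per cell ever needs to be materialized to serve these accessors.
    public byte[] getRowArray()   { return rows; }
    public int getRowOffset()     { return rowOff; }
    public int getRowLength()     { return rowLen; }
    public byte[] getValueArray() { return values; }
    public int getValueOffset()   { return valOff; }
    public int getValueLength()   { return valLen; }
    public long getTimestamp()    { return timestamp; }
}
```

A batch producer would hand out one such view per cell over its shared arrays; the cf and qualifier accessors are omitted here for brevity but would follow the same pattern.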
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878951#comment-13878951 ] Andrew Purtell commented on HBASE-10336: bq. Looks like this JIRA would get us past the above issue. Also, I hate to ask, but [~yuzhih...@gmail.com] did you check that applying this patch fixes the problem you have reported? Remove deprecated usage of Hadoop HttpServer in InfoServer -- Key: HBASE-10336 URL: https://issues.apache.org/jira/browse/HBASE-10336 Project: HBase Issue Type: Bug Affects Versions: 0.99.0 Reporter: Eric Charles Assignee: Eric Charles Attachments: HBASE-10336-1.patch, HBASE-10336-2.patch, HBASE-10336-3.patch, HBASE-10336-4.patch, HBASE-10336-5.patch Recent changes in Hadoop HttpServer give an NPE when running on hadoop 3.0.0-SNAPSHOT. The way we use HttpServer is deprecated and will probably not be fixed (see HDFS-5760). We'd better move to the new proposed builder pattern, which means we can no longer use inheritance to build our nice InfoServer.
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878948#comment-13878948 ] Andrew Purtell commented on HBASE-10336: This is a pretty substantial and risky change to go in so late just ahead of RC0, which I am likely to tag today. It should go on trunk only. However, that does mean neither 0.96.x nor 0.98.x will work on Hadoop 2.4.0 unless this change can go in on a minor bump. Ping [~stack] - would you put this in on a 0.96 minor? (I doubt it, just checking.)
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878952#comment-13878952 ] Eric Charles commented on HBASE-10336: -- [~te...@apache.org] I had that error with trunk on h3-snapshot, and yes, this patch solves it. - I am also double-checking the other error you mentioned. I am working on trunk: against which 0.98.x branch would you like to port this? Finally, oops: although this is in my top 3 for today, my dev time is exhausted, so this will hopefully be for tomorrow.
[jira] [Commented] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost
[ https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878961#comment-13878961 ] Andrew Purtell commented on HBASE-10338: bq. This patch should fix the NPE as the region server coprocessor host is not initialized in the constructor. lgtm, applying addendum to trunk and 0.98. Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost Key: HBASE-10338 URL: https://issues.apache.org/jira/browse/HBASE-10338 Project: HBase Issue Type: Bug Components: Coprocessors, regionserver Affects Versions: 0.98.0 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Fix For: 0.98.0, 0.96.2, 0.99.0 Attachments: 10338.1-0.96.patch, 10338.1-0.98.patch, 10338.1.patch, 10338.1.patch, HBASE-10338.0.patch, HBASE-10338_addendum.patch A runtime exception is being thrown when the AccessController CP is used with the region server. This is happening because the region server coprocessor host is created before zookeeper is initialized in the region server.
[jira] [Commented] (HBASE-10394) Test for Replication with tags
[ https://issues.apache.org/jira/browse/HBASE-10394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878966#comment-13878966 ] Andrew Purtell commented on HBASE-10394: +1 for trunk and 0.98. I will commit this shortly unless there is objection. Test for Replication with tags -- Key: HBASE-10394 URL: https://issues.apache.org/jira/browse/HBASE-10394 Project: HBase Issue Type: Test Affects Versions: 0.98.0 Reporter: Anoop Sam John Assignee: Anoop Sam John Attachments: HBASE-10394.patch Follow-up task for HBASE-10322: add a test to assert that replication works well and replicates cells with Tags when tags are being used.
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878971#comment-13878971 ] Eric Charles commented on HBASE-10336: -- [~apurtell] I hate to say it, but yes, it will fix the reported issue. Whether [~te...@apache.org] will hit another issue, who knows. At least, here it runs fine on h3-snapshot. I will come back in one hour and see if there is a demand for a final patch.
[jira] [Updated] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-10392: - Status: Patch Available (was: Open)
[jira] [Updated] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-10392: - Attachment: HBASE-10392.2.patch Once more for the hadoop1 profile. This is less of a code change and more of a doc patch at this point...
[jira] [Updated] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-10392: - Status: Open (was: Patch Available)
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878975#comment-13878975 ] Andrew Purtell commented on HBASE-10336: bq. [~apurtell] I hate to say it, but yes, it will fix the reported issue. :-) bq. I will come back in one hour and see if there is a demand for a final patch. If there's a patch that fixes this issue I promise to try it and consider it.
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878978#comment-13878978 ] stack commented on HBASE-10336: --- Not for a point release in 0.96. The exception should be carried back up to hadoop. There is a contract in place. It says we are allowed access at the head of the hadoop class. Let's fix hadoop branch-2 before it releases.
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878979#comment-13878979 ] Haohui Mai commented on HBASE-10336: The following code has been deprecated:
{code}
+  static {
+    Configuration conf = new Configuration();
+    boolean sslEnabled = conf.getBoolean(
+        CommonConfigurationKeysPublic.HADOOP_SSL_ENABLED_KEY,
+        CommonConfigurationKeysPublic.HADOOP_SSL_ENABLED_DEFAULT);
+    policy = sslEnabled ? Policy.HTTPS_ONLY : Policy.HTTP_ONLY;
+  }
+
+  public static void setPolicy(Policy policy) {
+    HttpConfig.policy = policy;
+  }
+
+  public static boolean isSecure() {
+    return policy == Policy.HTTPS_ONLY;
+  }
+
+  public static String getSchemePrefix() {
+    return (isSecure()) ? "https://" : "http://";
+  }
+
+  public static String getScheme(Policy policy) {
+    return policy == Policy.HTTPS_ONLY ? "https://" : "http://";
+  }
{code}
{code}
+    /**
+     * Use setAppDir() instead.
+     */
+    @Deprecated
+    public Builder setName(String name) {
+      this.name = name;
+      return this;
+    }
+
+    /**
+     * Use addEndpoint() instead.
+     */
+    @Deprecated
+    public Builder setBindAddress(String bindAddress) {
+      this.bindAddress = bindAddress;
+      return this;
+    }
+
+    /**
+     * Use addEndpoint() instead.
+     */
+    @Deprecated
+    public Builder setPort(int port) {
+      this.port = port;
+      return this;
+    }
{code}
And a significant amount of code in {{HttpServer}} has been marked as deprecated. You might want to get rid of that as well.
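For contrast, the builder style that supersedes those deprecated setters folds scheme, bind address, and port into endpoint URIs. The toy builder below only mimics that shape as a self-contained sketch; it is not the real Hadoop {{HttpServer}} builder, whose class and method set differ:

```java
import java.net.URI;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy stand-in, NOT the real org.apache.hadoop.http.HttpServer builder: it only
// mimics the shape in which addEndpoint(URI) supersedes setBindAddress/setPort,
// since scheme, host, and port then travel together in one URI.
public class InfoServerSpec {
    private final List<URI> endpoints;

    private InfoServerSpec(List<URI> endpoints) {
        this.endpoints = Collections.unmodifiableList(endpoints);
    }

    public List<URI> getEndpoints() { return endpoints; }

    public static class Builder {
        private final List<URI> endpoints = new ArrayList<>();

        // One call replaces the deprecated setBindAddress(String) + setPort(int) pair.
        public Builder addEndpoint(URI endpoint) {
            endpoints.add(endpoint);
            return this;
        }

        public InfoServerSpec build() {
            return new InfoServerSpec(new ArrayList<>(endpoints));
        }
    }
}
```

One consequence of this design, as the issue description notes, is that composition via a builder replaces subclassing, so InfoServer can no longer extend the HTTP server class.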
[jira] [Commented] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost
[ https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878982#comment-13878982 ] stack commented on HBASE-10338: --- I need it in 0.96 too, boss [~apurtell]. Thanks. (Thanks [~avandana] for fixing while traveling.)
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878980#comment-13878980 ] Andrew Purtell commented on HBASE-10336: bq. h3-snapshot. h3-snapshot is branch-2 HEAD and HBase trunk, that right?
[jira] [Commented] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost
[ https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878986#comment-13878986 ] Andrew Purtell commented on HBASE-10338: ... and will commit fix to 0.96 also, thanks [~stack]
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878983#comment-13878983 ] Andrew Purtell commented on HBASE-10336: bq. There is a contract in place. It says we are allowed access at the head of the hadoop class. Lets fix hadoop branch-2 before it releases. Ok, you heard the man. This is very unlikely to make the 0.98.0 RC 0 because I want to tag it today after a couple more things go in.
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878989#comment-13878989 ] Eric Charles commented on HBASE-10336: -- [~apurtell] I run hbase-trunk on hadoop-trunk. To run it successfully, I have two options: 1. The tiny patch of HADOOP-10232 rejected by [~wheat9] 2. The large patch of HBASE-10336, which is nearly ready, but for which I'm missing one hour. My bad, and I wouldn't want to inject incomplete and potentially buggy code into an hbase release.
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878999#comment-13878999 ] Andrew Purtell commented on HBASE-10336: Thanks [~e...@apache.org] bq. but for which I miss one hour. my bad It's not you. The solution to this problem looks like it needs doing upstream.
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879002#comment-13879002 ] stack commented on HBASE-10336: --- [~wheat9] it is deprecated... and the implication is? That you will purge in a 2.4 release? Why not in a 3.0? Which hadoop issue is this? Thanks. (For a whiff of why we are disturbed, we would like to avoid there being yet more variants on our hbase x hadoop table here http://hbase.apache.org/book.html#hadoop -- it is frightening enough as it is)
[jira] [Commented] (HBASE-10399) Add documentation for VerifyReplication to refguide
[ https://issues.apache.org/jira/browse/HBASE-10399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879012#comment-13879012 ] stack commented on HBASE-10399: --- [~ted_yu] You going to do it? Else I suggest closing this issue. If we had an issue for all that was undocumented we'd be at 100k issues. Add documentation for VerifyReplication to refguide --- Key: HBASE-10399 URL: https://issues.apache.org/jira/browse/HBASE-10399 Project: HBase Issue Type: Improvement Reporter: Ted Yu Priority: Minor The HBase refguide currently doesn't document how VerifyReplication is used for comparing a local table with a remote table. Documentation for VerifyReplication should be added so that users know how to use it.
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879013#comment-13879013 ] Andrew Purtell commented on HBASE-10336: Looks like the exception Ted reported above has been reported upstream as HADOOP-10252.
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879038#comment-13879038 ] stack commented on HBASE-10336: --- Is [~ted_yu] not picking up HADOOP-10252 in his testing or does that patch not totally fix the issue?
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879037#comment-13879037 ] Haohui Mai commented on HBASE-10336: HDFS-5305 and related jiras implement HTTPS support; as a side effect they have changed a lot of how {{HttpServer}} works. I anticipate that there are more changes for {{HttpServer}} in the near term. We want to keep it as a private API so that we can keep cleaning it up without worrying about compatibility. For distribution, the approach of this jira might work better. It seems to me that this is a reliable way to avoid making the table you've mentioned more frightening. :-)
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879039#comment-13879039 ] Ted Yu commented on HBASE-10336: @Andy: I was about to test patch v5, which applies cleanly on 0.98, in a cluster. Somehow there was a rollback in my local repo. I will proceed and report back.
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879046#comment-13879046 ] Andrew Purtell commented on HBASE-10336: bq. I was about to test patch v5 which applies cleanly on 0.98 in a cluster. It doesn't matter because this change isn't going to make RC0. I'm guessing we won't have just one release candidate, but a change of this magnitude is not likely to go in on any subsequent RC because it addresses a future problem.
[jira] [Comment Edited] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879046#comment-13879046 ] Andrew Purtell edited comment on HBASE-10336 at 1/22/14 7:15 PM: - bq. I was about to test patch v5 which applies cleanly on 0.98 in a cluster. It doesn't matter because this change isn't going to make RC0. I'm guessing we won't have just one release candidate, but this is not likely to go in on any subsequent RC because it only addresses a future problem in exchange for risky big changes. was (Author: apurtell): bq. I was about to test patch v5 which applies cleanly on 0.98 in a cluster. It doesn't matter because this change isn't going to make RC0. I'm guessing we won't have just one release candidate, but a change of this magnitude is not likely to go in on any subsequent RC because it addresses a future problem.
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879055#comment-13879055 ] stack commented on HBASE-10336: --- [~wheat9] All well and good. We are working to undo our dependency. This patch will go in soon, but as [~jxiang] says up in HADOOP-5305, we want both the deprecated APIs to remain in place a while. We've just 'learned' of your effort to purge this code in the 2.x timeframe. Unless there is some hard reason that the deprecated methods cannot stay in place (other than that you have not tested them -- we'll yell and patch if you break something), rather than piss off a downstream project and put us in an awkward spot, I suggest leaving them in place, deprecated.
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879067#comment-13879067 ] Eric Charles commented on HBASE-10336: -- If [~te...@apache.org] confirms patch v5 works well, it is safe to commit and release. The remaining cleanup is more about moving the servlet classes to the http package, removing the gitignore file, renaming/documenting the hadoop key, and removing @InterfaceAudience. This can be done afterwards.
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879071#comment-13879071 ] Andrew Purtell commented on HBASE-10336: bq. If Ted Yu confirms patch v5 works well, it is safe to commit and release. Well two HBase RMs disagree with your assessment.
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879081#comment-13879081 ] Jimmy Xiang commented on HBASE-10336: - FYI HADOOP-10252 hasn't fixed all issues related to the deprecated constructor. I am still looking into it now.
[jira] [Commented] (HBASE-7320) Replace calls to KeyValue.getBuffer with appropriate calls to getRowArray, getFamilyArray(), getQualifierArray, and getValueArray
[ https://issues.apache.org/jira/browse/HBASE-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879080#comment-13879080 ] Lars Hofhansl commented on HBASE-7320: -- Exactly. Would have to think a bit more about the TS. Do we want this to be backed by byte[] as well, or just completely get rid of the array/offset/length API for the TS and have just a getTimeStamp method? Replace calls to KeyValue.getBuffer with appropriate calls to getRowArray, getFamilyArray(), getQualifierArray, and getValueArray Key: HBASE-7320 URL: https://issues.apache.org/jira/browse/HBASE-7320 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Assignee: stack Fix For: 0.98.0 In many places this is a simple task of just replacing the method name. There are, however, quite a few places where we assume that: # the entire KV is backed by a single byte array # the KV's key portion is backed by a single byte array Some of those can easily be fixed; others will need their own jiras.
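The accessor pattern the jira title asks for can be sketched with a minimal stand-in for the Cell interface. This is an illustrative stub, not HBase code: only the method names getRowArray/getRowOffset/getRowLength mirror the real API, and copyRow is a hypothetical helper.

```java
import java.util.Arrays;

// Minimal illustrative stand-in for HBase's Cell interface: callers read the row
// via the array/offset/length triple instead of assuming one backing buffer
// obtained from getBuffer().
public class CellSketch {
    private final byte[] row;

    public CellSketch(byte[] row) { this.row = row; }

    public byte[] getRowArray() { return row; }
    public int getRowOffset()   { return 0; }
    public int getRowLength()   { return row.length; }

    // Copies the row using only the accessor triple; this stays correct even if
    // getRowArray() returned a larger shared buffer with a non-zero offset.
    public static byte[] copyRow(CellSketch c) {
        int off = c.getRowOffset();
        return Arrays.copyOfRange(c.getRowArray(), off, off + c.getRowLength());
    }
}
```

The same triple-based access applies to family, qualifier, and value; code that slices one whole-KV buffer by hand is what the jira wants removed.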
[jira] [Commented] (HBASE-7404) Bucket Cache:A solution about CMS,Heap Fragment and Big Cache on HBASE
[ https://issues.apache.org/jira/browse/HBASE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879091#comment-13879091 ] Liyin Tang commented on HBASE-7404: --- Liang, just curious, what's the top contributor to the p99 latency in your case?
It can greatly reduce CMS pauses and heap fragmentation caused by GC, and it supports a large cache space for high read performance by using high-speed disks like Fusion-io.
1. An implementation of block cache, like LruBlockCache
2. Manages blocks' storage positions itself through the Bucket Allocator
3. Cached blocks can be stored in memory or in the file system
4. Bucket Cache can be used as the main block cache (see CombinedBlockCache), combined with LruBlockCache, to reduce CMS pauses and fragmentation caused by GC
5. BucketCache can also be used as a secondary cache (e.g. using Fusion-io to store blocks) to enlarge the cache space
How about SlabCache? We studied and tested SlabCache first, but the results were bad, because:
1. SlabCache uses SingleSizeCache, whose memory utilization is low because of the many kinds of block size, especially when using DataBlockEncoding
2. SlabCache is used in DoubleBlockCache: a block is cached both in SlabCache and LruBlockCache, and is put into LruBlockCache again on a SlabCache hit, so CMS pauses and heap fragmentation don't get any better
3. Direct (off-heap) memory performance is not as good as heap, and it may cause OOM, so we recommend using the heap engine
See more in the attachment and in the patch
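The secondary-cache settings from the description above can be written out in hbase-site.xml roughly as follows; property names and values are the illustrative ones from the description, not tuning recommendations:

```xml
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>file:/disk1/hbase/cache.data</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <!-- interpreted as MB when using a file engine, per the description -->
  <value>1024</value>
</property>
<property>
  <name>hbase.bucketcache.combinedcache.enabled</name>
  <value>false</value>
</property>
```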
[jira] [Commented] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code
[ https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879099#comment-13879099 ] Hudson commented on HBASE-10375: SUCCESS: Integrated in HBase-0.98 #103 (See [https://builds.apache.org/job/HBase-0.98/103/]) HBASE-10375 hbase-default.xml hbase.status.multicast.address.port does not match code (nkeywal: rev 1560320) * /hbase/branches/0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java * /hbase/branches/0.98/hbase-common/src/main/resources/hbase-default.xml hbase-default.xml hbase.status.multicast.address.port does not match code - Key: HBASE-10375 URL: https://issues.apache.org/jira/browse/HBASE-10375 Project: HBase Issue Type: Bug Affects Versions: 0.98.0, 0.99.0, 0.96.1.1 Reporter: Jonathan Hsieh Assignee: Nicolas Liochon Fix For: 0.98.0, 0.96.2, 0.99.0 Attachments: 10375.v1.98-96.patch, 10375.v1.patch, 10375.v2.96-98.patch, 10375.v2.trunk.patch In hbase-default.xml
{code}
+ <property>
+   <name>hbase.status.multicast.address.port</name>
+   <value>6100</value>
+   <description>
+     Multicast port to use for the status publication by multicast.
+   </description>
+ </property>
{code}
In HConstants it was 60100.
{code}
public static final String STATUS_MULTICAST_PORT = "hbase.status.multicast.port";
public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
{code}
(it was 60100 in the code for 0.96 and 0.98.) I lean towards going with the code as opposed to the config file.
[jira] [Commented] (HBASE-10395) endTime won't be set in VerifyReplication if startTime is not set
[ https://issues.apache.org/jira/browse/HBASE-10395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879100#comment-13879100 ] Hudson commented on HBASE-10395: SUCCESS: Integrated in HBase-0.98 #103 (See [https://builds.apache.org/job/HBase-0.98/103/]) HBASE-10395 endTime won't be set in VerifyReplication if startTime is not set (Jianwei Cui) (Tedyu: rev 1560433) * /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java endTime won't be set in VerifyReplication if startTime is not set - Key: HBASE-10395 URL: https://issues.apache.org/jira/browse/HBASE-10395 Project: HBase Issue Type: Improvement Components: mapreduce, Replication Affects Versions: 0.94.16 Reporter: cuijianwei Assignee: cuijianwei Priority: Minor Fix For: 0.98.0, 0.99.0 Attachments: HBASE-10395-0.94-v1.patch, HBASE-10395-0.94-v2.patch, HBASE-10395-0.94-v3.patch, HBASE-10395-trunk-v1.patch, HBASE-10395-trunk-v2.patch, HBASE-10395-trunk-v3.patch In VerifyReplication, we may set startTime and endTime to restrict the data to verify. However, the endTime won't be set in the program if we only pass endTime without startTime as a command line argument. The reason is the following code:
{code}
if (startTime != 0) {
  scan.setTimeRange(startTime, endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime);
}
{code}
The code will ignore the endTime setting when startTime is not passed as a command line argument. Another place that needs improvement is the help message, as follows:
{code}
System.err.println(" stoprow      end of the row");
{code}
However, the program actually uses endrow to parse the arguments:
{code}
final String endTimeArgKey = "--endtime=";
if (cmd.startsWith(endTimeArgKey)) {
  endTime = Long.parseLong(cmd.substring(endTimeArgKey.length()));
  continue;
}
{code}
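The guard described in the report can be sketched as follows. This is a hypothetical helper, not the actual VerifyReplication fix: LATEST_TIMESTAMP stands in for HConstants.LATEST_TIMESTAMP, and resolveTimeRange just shows how an endTime given without a startTime would no longer be ignored.

```java
// Hypothetical sketch of the corrected time-range guard from the report above.
public class TimeRangeGuard {
    static final long LATEST_TIMESTAMP = Long.MAX_VALUE; // stand-in for HConstants.LATEST_TIMESTAMP

    // Returns {min, max} to pass to Scan.setTimeRange, or null when neither bound was given.
    public static long[] resolveTimeRange(long startTime, long endTime) {
        if (startTime == 0 && endTime == 0) {
            return null; // no restriction requested on the command line
        }
        // Unlike the buggy version, an endTime-only invocation is honored here.
        return new long[] { startTime, endTime == 0 ? LATEST_TIMESTAMP : endTime };
    }
}
```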
[jira] [Created] (HBASE-10400) [hbck] Continue if region dir missing on region merge attempt
Jonathan Hsieh created HBASE-10400: -- Summary: [hbck] Continue if region dir missing on region merge attempt Key: HBASE-10400 URL: https://issues.apache.org/jira/browse/HBASE-10400 Project: HBase Issue Type: Bug Components: hbck Affects Versions: 0.96.1.1, 0.94.16, 0.92.2 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh In a recent support case, the hbck repair tool would eventually hang because we didn't handle the case where the merge info is old under hadoop2's fs.listStatus semantics (throws an exception instead of returning null). This is a trivial patch that handles the newer hadoop2 semantics.
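A minimal sketch of the compatibility handling described above, assuming the fix normalizes hadoop2's exception back to the hadoop1 null contract. FileStatusLister is a hypothetical stand-in for Hadoop's FileSystem, not the real API.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Hypothetical sketch: hadoop1 FileSystem.listStatus returned null for a missing
// directory, while hadoop2 throws FileNotFoundException. Normalizing back to the
// old contract lets a caller like hbck treat "missing region dir" as "skip and continue".
public class ListStatusCompat {
    public interface FileStatusLister {
        String[] listStatus(String dir) throws IOException;
    }

    public static String[] listStatusOrNull(FileStatusLister fs, String dir) throws IOException {
        try {
            return fs.listStatus(dir);
        } catch (FileNotFoundException fnfe) {
            return null; // hadoop2 semantics: directory is gone, report it as null
        }
    }
}
```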
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879103#comment-13879103 ] Eric Charles commented on HBASE-10336: -- [~jxiang] HADOOP-10252 sounds like a duplicate of HADOOP-10232, for which [1] is a valid patch IMHO. The patch submitted for HADOOP-10252 [2] seems like the first try I made, but I had to add more code to make it work. [~apurtell] Sorry, I should have said IMHO it would be safe to commit and release if no other issue is found after unit and integration tests + real deployment and traffic on a real cluster. [1] https://issues.apache.org/jira/secure/attachment/12622867/HDFS-5760-2.patch [2] https://issues.apache.org/jira/secure/attachment/12624372/hadoop-10252.patch
[jira] [Updated] (HBASE-10400) [hbck] Continue if region dir missing on region merge attempt
[ https://issues.apache.org/jira/browse/HBASE-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-10400: --- Attachment: hbase-10400.patch Applies to trunk.
[jira] [Updated] (HBASE-10400) [hbck] Continue if region dir missing on region merge attempt
[ https://issues.apache.org/jira/browse/HBASE-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-10400: --- Status: Patch Available (was: Open)
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879116#comment-13879116 ] Jimmy Xiang commented on HBASE-10336: - [~echarles], you are right HADOOP-10252 is not complete. I filed HADOOP-10254, with which the mini cluster starts.
[jira] [Commented] (HBASE-7320) Replace calls to KeyValue.getBuffer with appropriate calls to getRowArray, getFamilyArray(), getQualifierArray, and getValueArray
[ https://issues.apache.org/jira/browse/HBASE-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879120#comment-13879120 ] Matt Corgan commented on HBASE-7320: It would be nice to move towards callers of getTimestamp() expecting the long value returned by the Cell interface. I'm guessing there is little to negative performance gain from operating on the bytes directly.
[jira] [Commented] (HBASE-10400) [hbck] Continue if region dir missing on region merge attempt
[ https://issues.apache.org/jira/browse/HBASE-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879118#comment-13879118 ] Jonathan Hsieh commented on HBASE-10400: Also applies to 0.96/0.98. I can do a 0.94 version if requested.
[jira] [Commented] (HBASE-10392) Correct references to hbase.regionserver.global.memstore.upperLimit
[ https://issues.apache.org/jira/browse/HBASE-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879122#comment-13879122 ] Hadoop QA commented on HBASE-10392: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12624388/HBASE-10392.2.patch against trunk revision .
ATTACHMENT ID: 12624388
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop 1.1 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
{color:red}-1 site{color}. The patch appears to cause the mvn site goal to fail.
{color:green}+1 core tests{color}. The patch passed unit tests in .
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8498//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8498//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8498//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8498//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8498//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8498//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8498//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8498//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8498//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8498//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8498//console
This message is automatically generated. Correct references to hbase.regionserver.global.memstore.upperLimit --- Key: HBASE-10392 URL: https://issues.apache.org/jira/browse/HBASE-10392 Project: HBase Issue Type: Bug Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.99.0 Attachments: HBASE-10392.0.patch, HBASE-10392.1.patch, HBASE-10392.2.patch As part of the awesome new HBASE-5349, a couple of references to {{hbase.regionserver.global.memstore.upperLimit}} were missed. Clean those up to use the new {{hbase.regionserver.global.memstore.size}} instead.
[jira] [Created] (HBASE-10401) [hbck] perform overlap group merges in parallel
Jonathan Hsieh created HBASE-10401: -- Summary: [hbck] perform overlap group merges in parallel Key: HBASE-10401 URL: https://issues.apache.org/jira/browse/HBASE-10401 Project: HBase Issue Type: Bug Components: hbck Affects Versions: 0.96.1.1, 0.94.16, 0.92.2, 0.98.0, 0.99.0 Reporter: Jonathan Hsieh Assignee: Jonathan Hsieh In a recent support case, we encountered a corrupt HBase that had thousands of overlap groups (regions that had overlapping key ranges). The current implementation repairs these by serially taking a group, performing a merge, and then moving on to the next group. Because assignments and hdfs namenode operations are involved, each merge could take on the order of seconds. With thousands of overlap groups, this could take hours to complete. This patch makes it so that these independent merge groups are merged in parallel. It uses the same thread pool used for other fs info-gathering operations.
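The parallelization described can be sketched with a plain ExecutorService. mergeGroup here is a placeholder for the actual hbck region-merge step (assignments plus NameNode operations), not the real implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

// Sketch of submitting independent overlap-group merges to a shared thread pool,
// then waiting for all of them, as the issue describes.
public class ParallelMerges {
    // Placeholder for the real per-group merge work.
    static int mergeGroup(List<String> group) {
        return group.size();
    }

    public static int mergeAll(List<List<String>> groups, ExecutorService pool)
            throws InterruptedException, ExecutionException {
        List<Future<Integer>> pending = new ArrayList<>();
        for (List<String> g : groups) {
            pending.add(pool.submit(() -> mergeGroup(g))); // groups are independent of each other
        }
        int merged = 0;
        for (Future<Integer> f : pending) {
            merged += f.get(); // block until every group has finished before reporting
        }
        return merged;
    }
}
```

Because no two overlap groups share a region, there is no ordering constraint between tasks, which is what makes the serial loop safe to parallelize.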
[jira] [Commented] (HBASE-10336) Remove deprecated usage of Hadoop HttpServer in InfoServer
[ https://issues.apache.org/jira/browse/HBASE-10336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879127#comment-13879127 ] Eric Charles commented on HBASE-10336: -- [~jxiang] With HADOOP-10254, TestInfoServer may be ok, but hbase will fail at runtime; at least I would bet on that. On the other hand, HADOOP-10252 is not enough; a patch uploaded here (not the last one) would be needed. Btw, what about simply dropping any change to hadoop and concentrating on the hbase decoupling? It was the plan, no? Sorry, I have to leave. Read you tomorrow.
[jira] [Commented] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost
[ https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879131#comment-13879131 ] Andrew Purtell commented on HBASE-10338: Committed addendum to trunk, 0.98, and 0.96. Patch applied everywhere with minor fuzz only; the failing test completes successfully with no NPE reported in the log. Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost Key: HBASE-10338 URL: https://issues.apache.org/jira/browse/HBASE-10338 Project: HBase Issue Type: Bug Components: Coprocessors, regionserver Affects Versions: 0.98.0 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Fix For: 0.98.0, 0.96.2, 0.99.0 Attachments: 10338.1-0.96.patch, 10338.1-0.98.patch, 10338.1.patch, 10338.1.patch, HBASE-10338.0.patch, HBASE-10338_addendum.patch A runtime exception is being thrown when the AccessController CP is used with the region server. This is happening because the region server coprocessor host is created before ZooKeeper is initialized in the region server.
[jira] [Commented] (HBASE-10397) Fix findbugs introduced from HBASE-9426
[ https://issues.apache.org/jira/browse/HBASE-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879148#comment-13879148 ] Hudson commented on HBASE-10397:
--
SUCCESS: Integrated in HBase-TRUNK #4849 (See [https://builds.apache.org/job/HBase-TRUNK/4849/])
HBASE-10397 Fix findbugs introduced from HBASE-9426. (anoopsamjohn: rev 1560427)
* /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java

Fix findbugs introduced from HBASE-9426
---
Key: HBASE-10397
URL: https://issues.apache.org/jira/browse/HBASE-10397
Project: HBase
Issue Type: Bug
Affects Versions: 0.99.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Minor
Fix For: 0.99.0
Attachments: HBASE-10397.patch

Method org.apache.hadoop.hbase.client.HBaseAdmin.execProcedure(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator.
Method org.apache.hadoop.hbase.client.HBaseAdmin.isProcedureFinished(String, String, Map) makes inefficient use of keySet iterator instead of entrySet iterator.
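The findbugs pattern being fixed is a standard one: iterating a map's keySet() and calling get(key) inside the loop does a redundant hash lookup per entry, while entrySet() yields key and value together. A minimal self-contained sketch of the two styles (not the actual HBaseAdmin code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EntrySetDemo {
    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        props.put("table", "t1");
        props.put("flush", "true");

        // Inefficient: each iteration performs an extra hash lookup via get().
        StringBuilder slow = new StringBuilder();
        for (String key : props.keySet()) {
            slow.append(key).append('=').append(props.get(key)).append(';');
        }

        // Preferred: entrySet() yields key and value together, no extra lookup.
        StringBuilder fast = new StringBuilder();
        for (Map.Entry<String, String> e : props.entrySet()) {
            fast.append(e.getKey()).append('=').append(e.getValue()).append(';');
        }

        // Both produce identical output; only the lookup cost differs.
        System.out.println(slow.toString().equals(fast.toString()));
    }
}
```

The fix is behavior-preserving, which is why it could land as a small, low-risk patch.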
[jira] [Updated] (HBASE-10401) [hbck] perform overlap group merges in parallel
[ https://issues.apache.org/jira/browse/HBASE-10401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hsieh updated HBASE-10401:
---
Attachment: hbase-10401.patch

Applies to trunk; should apply to 0.96/0.98.

[hbck] perform overlap group merges in parallel
---
Key: HBASE-10401
URL: https://issues.apache.org/jira/browse/HBASE-10401
Project: HBase
Issue Type: Bug
Components: hbck
Affects Versions: 0.92.2, 0.98.0, 0.94.16, 0.99.0, 0.96.1.1
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Attachments: hbase-10401.patch

In a recent support case, we encountered a corrupt HBase that had thousands of overlap groups (regions with overlapping key ranges). The current implementation repairs these serially: take a group, perform the merge, then move on to the next group. Because assignments and HDFS namenode operations are involved, each merge can take on the order of seconds; with thousands of overlap groups, the repair could take hours to complete. This patch merges these independent overlap groups in parallel, reusing the same thread pool used for other fs info-gathering operations.
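Since the overlap groups are independent, the parallelization described above amounts to submitting one merge task per group to a shared pool and collecting the results. A minimal sketch of that shape, assuming hypothetical names (this is not the hbck code; a real merge would reassign and merge regions rather than count them):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelMergeSketch {
    // Hypothetical stand-in for repairing one overlap group of regions.
    static int mergeGroup(List<String> group) {
        // In hbck this step involves assignments and HDFS namenode calls;
        // here we just report how many regions the group contained.
        return group.size();
    }

    public static void main(String[] args) throws Exception {
        List<List<String>> overlapGroups = List.of(
            List.of("regionA", "regionB"),
            List.of("regionC", "regionD", "regionE"),
            List.of("regionF"));

        // One shared pool for all groups, analogous to reusing the
        // fs info-gathering pool rather than creating a new one.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> results = new ArrayList<>();
        for (List<String> group : overlapGroups) {
            results.add(pool.submit(() -> mergeGroup(group)));
        }
        int merged = 0;
        for (Future<Integer> f : results) {
            merged += f.get(); // groups are independent, so completion order is irrelevant
        }
        pool.shutdown();
        System.out.println(merged);
    }
}
```

With seconds-per-merge latency dominated by waiting on assignments and namenode RPCs rather than CPU, a modest pool size already collapses hours of serial work into a fraction of the time.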
[jira] [Commented] (HBASE-9426) Make custom distributed barrier procedure pluggable
[ https://issues.apache.org/jira/browse/HBASE-9426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13879146#comment-13879146 ] Hudson commented on HBASE-9426:
---
SUCCESS: Integrated in HBase-TRUNK #4849 (See [https://builds.apache.org/job/HBase-TRUNK/4849/])
HBASE-10397 Fix findbugs introduced from HBASE-9426. (anoopsamjohn: rev 1560427)
* /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java

Make custom distributed barrier procedure pluggable
Key: HBASE-9426
URL: https://issues.apache.org/jira/browse/HBASE-9426
Project: HBase
Issue Type: Improvement
Affects Versions: 0.95.2, 0.94.11
Reporter: Richard Ding
Assignee: Richard Ding
Fix For: 0.99.0
Attachments: HBASE-9426-4.patch, HBASE-9426-4.patch, HBASE-9426-6.patch, HBASE-9426-7.patch, HBASE-9426.patch.1, HBASE-9426.patch.2, HBASE-9426.patch.3

Currently, if one wants to implement a custom distributed barrier procedure (e.g., distributed log roll or distributed table flush), the HBase core code needs to be modified for the procedure to work. Looking into the snapshot code (especially on the region server side), most of the code that enables the procedure is generic life-cycle management (i.e., init, start, stop). We can make this part pluggable. Here is the proposal. Following the coprocessor example, we define two properties:
{code}
hbase.procedure.regionserver.classes
hbase.procedure.master.classes
{code}
The values for both are comma-delimited lists of classes.
On the region server side, the classes implement the following interface:
{code}
public interface RegionServerProcedureManager {
  public void initialize(RegionServerServices rss) throws KeeperException;
  public void start();
  public void stop(boolean force) throws IOException;
  public String getProcedureName();
}
{code}
While on the Master side, the classes implement this interface:
{code}
public interface MasterProcedureManager {
  public void initialize(MasterServices master) throws KeeperException, IOException, UnsupportedOperationException;
  public void stop(String why);
  public String getProcedureName();
  public void execProcedure(ProcedureDescription desc) throws IOException;
}
{code}
where ProcedureDescription is defined as
{code}
message ProcedureDescription {
  required string name = 1;
  required string instance = 2;
  optional int64 creationTime = 3 [default = 0];

  message Property {
    required string tag = 1;
    optional string value = 2;
  }
  repeated Property props = 4;
}
{code}
A generic API can be defined on HMaster to trigger a procedure:
{code}
public boolean execProcedure(ProcedureDescription desc) throws IOException;
{code}
_SnapshotManager_ and _RegionServerSnapshotManager_ are special examples of _MasterProcedureManager_ and _RegionServerProcedureManager_; they will be automatically included (users don't need to specify them in the conf file).
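Following the coprocessor example, the comma-delimited class-list properties would presumably be resolved reflectively at startup. A self-contained sketch of that loading step, using a simplified stand-in interface (the names `ProcedureManager`, `LogRollManager`, and `load` are hypothetical, not part of the proposal's API):

```java
import java.util.ArrayList;
import java.util.List;

public class ProcedureManagerLoader {
    // Minimal stand-in for the proposed manager interfaces.
    public interface ProcedureManager {
        String getProcedureName();
    }

    // Example pluggable implementation a user might name in the property.
    public static class LogRollManager implements ProcedureManager {
        public String getProcedureName() { return "log-roll"; }
    }

    // Instantiate every class named in a comma-delimited property value,
    // the way hbase.procedure.*.classes is proposed to work.
    static List<ProcedureManager> load(String classList) throws Exception {
        List<ProcedureManager> managers = new ArrayList<>();
        for (String name : classList.split(",")) {
            Class<?> c = Class.forName(name.trim());
            managers.add((ProcedureManager) c.getDeclaredConstructor().newInstance());
        }
        return managers;
    }

    public static void main(String[] args) throws Exception {
        List<ProcedureManager> ms = load("ProcedureManagerLoader$LogRollManager");
        System.out.println(ms.get(0).getProcedureName());
    }
}
```

Each loaded manager would then be driven through the generic life-cycle (initialize, start, stop) by the host, which is exactly the part the proposal factors out of the snapshot-specific code.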