[jira] [Commented] (HBASE-17646) Implement Async getRegion method
[ https://issues.apache.org/jira/browse/HBASE-17646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891764#comment-15891764 ] Hudson commented on HBASE-17646: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2597 (See [https://builds.apache.org/job/HBase-Trunk_matrix/2597/]) HBASE-17646: Implement Async getRegion method (zhangduo: rev 697a55a8782d940aa4f1287c2ef4a45ba516cac1) * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncConnectionConfiguration.java * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncTableImpl.java * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncAdmin.java * (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/AsyncMetaTableAccessor.java > Implement Async getRegion method > > > Key: HBASE-17646 > URL: https://issues.apache.org/jira/browse/HBASE-17646 > Project: HBase > Issue Type: Sub-task > Components: asyncclient >Affects Versions: 2.0.0 >Reporter: Zheng Hu >Assignee: Zheng Hu > Labels: asynchronous > Fix For: 2.0.0 > > Attachments: HBASE-17646.v1.patch, HBASE-17646.v2.patch, > HBASE-17646.v3.patch, HBASE-17646.v4.patch > > > There are some async admin APIs which depend on the async getRegion method. > Such as: > 1. closeRegion. > 2. flushRegion. > 3. compactRegion. > 4. mergeRegion. > 5. splitRegion. > and so on. > So, implement the async getRegion method first. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
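[Editor's note] The dependency described above can be sketched minimally. This is a hypothetical illustration, not the actual HBase API: the class, method names, and the in-memory "meta" map are made up. It shows why a CompletableFuture-returning getRegion lookup comes first, since admin operations such as flushRegion chain on it instead of blocking.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch (not the real AsyncHBaseAdmin): admin operations
// resolve a region to its location via an async lookup and chain on the
// returned future rather than blocking.
public class AsyncGetRegionSketch {
    // Stand-in for a meta-table lookup; real code would query hbase:meta.
    static final Map<String, String> META = Map.of("region-1", "rs-host:16020");

    static CompletableFuture<Optional<String>> getRegionLocation(String regionName) {
        return CompletableFuture.supplyAsync(() -> Optional.ofNullable(META.get(regionName)));
    }

    // An admin op built on top of the async lookup.
    static CompletableFuture<String> flushRegion(String regionName) {
        return getRegionLocation(regionName).thenApply(loc ->
            loc.map(l -> "flush sent to " + l)
               .orElse("unknown region " + regionName));
    }

    public static void main(String[] args) {
        System.out.println(flushRegion("region-1").join());
    }
}
```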
[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block
[ https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891761#comment-15891761 ] ramkrishna.s.vasudevan commented on HBASE-17623: Thanks for the update, got it. I asked just for confirmation. baosInMemory was already a ByteArrayOutputStream, so no problem. > Reuse the bytes array when building the hfile block > --- > > Key: HBASE-17623 > URL: https://issues.apache.org/jira/browse/HBASE-17623 > Project: HBase > Issue Type: Improvement >Reporter: CHIA-PING TSAI >Assignee: CHIA-PING TSAI >Priority: Minor > Fix For: 2.0.0, 1.4.0 > > Attachments: after(snappy_hfilesize=5.04GB).png, > after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, > before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, > HBASE-17623.branch-1.v1.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, > HBASE-17623.v1.patch, HBASE-17623.v2.patch, memory allocation measurement.xlsx > > > There are three improvements. > # The onDiskBlockBytesWithHeader should maintain a bytes array which can be > reused when building the hfile. > # The onDiskBlockBytesWithHeader is copied to a new byte array only when we > need to cache the block. > # If no block needs to be cached, the uncompressedBlockBytesWithHeader will > never be created. > {code:title=HFileBlock.java|borderStyle=solid} > private void finishBlock() throws IOException { > if (blockType == BlockType.DATA) { > this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, > userDataStream, > baosInMemory.getBuffer(), blockType); > blockType = dataBlockEncodingCtx.getBlockType(); > } > userDataStream.flush(); > // This does an array copy, so it is safe to cache this byte array when > cache-on-write. > // Header is still the empty, 'dummy' header that is yet to be filled > out. 
> uncompressedBlockBytesWithHeader = baosInMemory.toByteArray(); > prevOffset = prevOffsetByType[blockType.getId()]; > // We need to set state before we can package the block up for > cache-on-write. In a way, the > // block is ready, but not yet encoded or compressed. > state = State.BLOCK_READY; > if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) > { > onDiskBlockBytesWithHeader = dataBlockEncodingCtx. > compressAndEncrypt(uncompressedBlockBytesWithHeader); > } else { > onDiskBlockBytesWithHeader = defaultBlockEncodingCtx. > compressAndEncrypt(uncompressedBlockBytesWithHeader); > } > // Calculate how many bytes we need for checksum on the tail of the > block. > int numBytes = (int) ChecksumUtil.numBytes( > onDiskBlockBytesWithHeader.length, > fileContext.getBytesPerChecksum()); > // Put the header for the on disk bytes; header currently is > unfilled-out > putHeader(onDiskBlockBytesWithHeader, 0, > onDiskBlockBytesWithHeader.length + numBytes, > uncompressedBlockBytesWithHeader.length, > onDiskBlockBytesWithHeader.length); > // Set the header for the uncompressed bytes (for cache-on-write) -- > IFF different from > // onDiskBlockBytesWithHeader array. > if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) { > putHeader(uncompressedBlockBytesWithHeader, 0, > onDiskBlockBytesWithHeader.length + numBytes, > uncompressedBlockBytesWithHeader.length, > onDiskBlockBytesWithHeader.length); > } > if (onDiskChecksum.length != numBytes) { > onDiskChecksum = new byte[numBytes]; > } > ChecksumUtil.generateChecksums( > onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length, > onDiskChecksum, 0, fileContext.getChecksumType(), > fileContext.getBytesPerChecksum()); > }{code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
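[Editor's note] The reuse idea in this issue can be sketched minimally. This is a hypothetical illustration, not the actual HBase patch: a ByteArrayOutputStream that exposes its internal buffer so finished bytes can be read without the copy toByteArray() makes, which is the same idea the patch applies so the block's byte array can be reused across blocks.

```java
import java.io.ByteArrayOutputStream;

// Hypothetical sketch: expose the internal buffer of a
// ByteArrayOutputStream so callers can read [0, size()) without the
// defensive copy made by toByteArray(). reset() keeps the same backing
// array, so it can be reused when building the next block.
public class ReusableByteArrayOutputStream extends ByteArrayOutputStream {
    public ReusableByteArrayOutputStream(int capacity) {
        super(capacity);
    }

    /** Returns the internal buffer; valid bytes are [0, size()). No copy. */
    public byte[] getBuffer() {
        return buf;
    }

    public static void main(String[] args) {
        ReusableByteArrayOutputStream out = new ReusableByteArrayOutputStream(16);
        out.write(new byte[] {1, 2, 3}, 0, 3);
        // toByteArray() allocates a fresh array; getBuffer() does not.
        System.out.println(out.toByteArray() == out.getBuffer()); // false
    }
}
```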
[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block
[ https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891747#comment-15891747 ] CHIA-PING TSAI commented on HBASE-17623: HFileBlock.Writer#startWriting writes the dummy header to reserve the space. -- (1) {noformat} baosInMemory.reset(); baosInMemory.write(HConstants.HFILEBLOCK_DUMMY_HEADER); {noformat} If the type is BlockType.DATA, BufferedDataBlockEncoder#startBlockEncoding reserves the space for the id. -- (2) {noformat} if (newBlockType == BlockType.DATA) { this.dataBlockEncoder.startBlockEncoding(dataBlockEncodingCtx, userDataStream); } {noformat} {noformat} StreamUtils.writeInt(out, 0); // DUMMY length. This will be updated in endBlockEncoding() {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
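[Editor's note] The reserve-then-patch pattern discussed in (1) and (2) above can be sketched as follows. This is a simplified illustration, not the real HFileBlock code: the header size and field layout are made up. The point is that a placeholder header is written first to reserve space, and the real lengths are patched in place once they are known.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

// Simplified sketch of the dummy-header pattern: reserve header space up
// front with placeholder bytes, then overwrite it once the sizes are
// known. Sizes and layout here are hypothetical, not the HFile format.
public class DummyHeaderSketch {
    static final int HEADER_SIZE = 8; // hypothetical: two ints
    static final byte[] DUMMY_HEADER = new byte[HEADER_SIZE];

    public static byte[] buildBlock(byte[] payload) {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        baos.write(DUMMY_HEADER, 0, HEADER_SIZE);   // reserve the space
        baos.write(payload, 0, payload.length);     // block body
        byte[] block = baos.toByteArray();
        // Patch the reserved header in place now that lengths are known.
        ByteBuffer.wrap(block)
            .putInt(0, block.length)                // "on disk" size
            .putInt(4, payload.length);             // "uncompressed" size
        return block;
    }

    public static void main(String[] args) {
        byte[] block = buildBlock(new byte[] {42, 43});
        System.out.println(ByteBuffer.wrap(block).getInt(0)); // 10
    }
}
```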
[jira] [Updated] (HBASE-17718) Difference between RS's servername and its ephemeral node cause SSH stop working
[ https://issues.apache.org/jira/browse/HBASE-17718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allan Yang updated HBASE-17718: --- Description: After HBASE-9593, an RS puts up an ephemeral node in ZK before reporting for duty. But if the hosts config (/etc/hosts) is different between the master and the RS, the RS's serverName can be different from the one stored in the ephemeral zk node. The email mentioned in HBASE-13753 (http://mail-archives.apache.org/mod_mbox/hbase-user/201505.mbox/%3CCANZDn9ueFEEuZMx=pZdmtLsdGLyZz=rrm1N6EQvLswYc1z-H=g...@mail.gmail.com%3E) is exactly what happened in our production env. But what the email didn't point out is that the difference between the serverName in the RS and the zk node can cause SSH to stop working, as we can see from the code in {{RegionServerTracker}} {code} @Override public void nodeDeleted(String path) { if (path.startsWith(watcher.rsZNode)) { String serverName = ZKUtil.getNodeName(path); LOG.info("RegionServer ephemeral node deleted, processing expiration [" + serverName + "]"); ServerName sn = ServerName.parseServerName(serverName); if (!serverManager.isServerOnline(sn)) { LOG.warn(serverName.toString() + " is not online or isn't known to the master."+ "The latter could be caused by a DNS misconfiguration."); return; } remove(sn); this.serverManager.expireServer(sn); } } {code} The server will not be processed by SSH/ServerCrashProcedure. The regions on this server will not be assigned again until a master restart or failover. I know HBASE-9593 was meant to fix the case where an RS reports for duty and crashes before it can put up a zk node. That is a very rare case (and controllable: just fix the bug making the RS crash). But the issue I mentioned can happen more often (and is uncontrollable; it can't be fixed in HBase, since it is caused by DNS, hosts config, etc.) and has more severe consequences. So here I offer some solutions to discuss: 1. Revert HBASE-9593 from all branches; Andrew Purtell has reverted it in branch-0.98. 2. Abort the RS if the master returns a different name; otherwise SSH can't work properly. 3. The master accepts whatever servername is reported by the RS and doesn't change it.
[jira] [Updated] (HBASE-17718) Difference between RS's servername and its ephemeral node cause SSH stop working
[ https://issues.apache.org/jira/browse/HBASE-17718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allan Yang updated HBASE-17718: --- Description: After HBASE-9593, an RS puts up an ephemeral node in ZK before reporting for duty. But if the hosts config (/etc/hosts) is different between the master and the RS, the RS's serverName can be different from the one stored in the ephemeral zk node. The email mentioned in HBASE-13753 (http://mail-archives.apache.org/mod_mbox/hbase-user/201505.mbox/%3CCANZDn9ueFEEuZMx=pZdmtLsdGLyZz=rrm1N6EQvLswYc1z-H=g...@mail.gmail.com%3E) is exactly what happened in our production env. But what the email didn't point out is that the difference between the serverName in the RS and the zk node can cause SSH to stop working, as we can see from the code in {{RegionServerTracker}} {code} @Override public void nodeDeleted(String path) { if (path.startsWith(watcher.rsZNode)) { String serverName = ZKUtil.getNodeName(path); LOG.info("RegionServer ephemeral node deleted, processing expiration [" + serverName + "]"); ServerName sn = ServerName.parseServerName(serverName); if (!serverManager.isServerOnline(sn)) { LOG.warn(serverName.toString() + " is not online or isn't known to the master."+ "The latter could be caused by a DNS misconfiguration."); return; } remove(sn); this.serverManager.expireServer(sn); } } {code} The server will not be processed by SSH/ServerCrashProcedure. The regions on this server will not be assigned again until a master restart or failover. I know HBASE-9593 was meant to fix the case where an RS reports for duty and crashes before it can put up a zk node. That is a very rare case (and controllable: just fix the bug making the RS crash). But the issue I mentioned can happen more often (and is uncontrollable; it can't be fixed in HBase, since it is caused by DNS, hosts config, etc.) and has more severe consequences. So here I offer some solutions to discuss: 1. Revert HBASE-9593 from all branches; Andrew Purtell has reverted it in branch-0.98. 2. Abort the RS if the master returns a different name; otherwise SSH can't work properly. 3. The master receives whatever servername is reported by the RS and doesn't change it.
[jira] [Created] (HBASE-17718) Difference between RS's servername and its ephemeral node cause SSH stop working
Allan Yang created HBASE-17718: -- Summary: Difference between RS's servername and its ephemeral node cause SSH stop working Key: HBASE-17718 URL: https://issues.apache.org/jira/browse/HBASE-17718 Project: HBase Issue Type: Bug Affects Versions: 1.1.8, 1.2.4, 2.0.0 Reporter: Allan Yang Assignee: Allan Yang After HBASE-9593, an RS puts up an ephemeral node in ZK before reporting for duty. But if the hosts config (/etc/hosts) is different between the master and the RS, the RS's serverName can be different from the one stored in the ephemeral zk node. The email mentioned in HBASE-13753 (http://mail-archives.apache.org/mod_mbox/hbase-user/201505.mbox/%3CCANZDn9ueFEEuZMx=pZdmtLsdGLyZz=rrm1N6EQvLswYc1z-H=g...@mail.gmail.com%3E) is exactly what happened in our production env. But what the email didn't point out is that the difference between the serverName in the RS and the zk node can cause SSH to stop working, as we can see from the code in {{RegionServerTracker}} {code} @Override public void nodeDeleted(String path) { if (path.startsWith(watcher.rsZNode)) { String serverName = ZKUtil.getNodeName(path); LOG.info("RegionServer ephemeral node deleted, processing expiration [" + serverName + "]"); ServerName sn = ServerName.parseServerName(serverName); if (!serverManager.isServerOnline(sn)) { LOG.warn(serverName.toString() + " is not online or isn't known to the master."+ "The latter could be caused by a DNS misconfiguration."); return; } remove(sn); this.serverManager.expireServer(sn); } } {code} The server will not be processed by SSH/ServerCrashProcedure. The regions on this server will not be assigned again until a master restart or failover. I know HBASE-9593 was meant to fix the case where an RS reports for duty and crashes before it can put up a zk node. That is a very rare case. But the issue I mentioned can happen more often (due to DNS, config, etc.) and has more severe consequences. So here I offer some solutions to discuss: 1. Revert HBASE-9593 from all branches; Andrew Purtell has reverted it in branch-0.98. 2. Abort the RS if the master returns a different name; otherwise SSH can't work properly. 3. The master receives whatever servername is reported by the RS and doesn't change it. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
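[Editor's note] The failure mode described in HBASE-17718 can be illustrated with a toy sketch. This is hypothetical code, not HBase's: the master tracks online servers under the name it resolved itself, so when the ephemeral znode carries a different hostname, the parsed name never matches the online map and expiration (and thus SSH) is skipped.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the bug: mirrors the shape of
// RegionServerTracker.nodeDeleted(), which only expires a server whose
// parsed name is known to the master as online.
public class ServerNameMismatchSketch {
    public static boolean wouldExpire(Set<String> onlineServers, String znodeName) {
        // A mismatch between the znode's hostname and the master's view
        // means this lookup fails and expireServer() is never reached.
        return onlineServers.contains(znodeName);
    }

    public static void main(String[] args) {
        Set<String> online = new HashSet<>();
        // Name the master resolved through its own /etc/hosts:
        online.add("host-a.internal,16020,1488344400000");
        // Name the RS put in its ephemeral znode (a different DNS view):
        String znode = "host-a,16020,1488344400000";
        System.out.println(wouldExpire(online, znode)); // false: SSH is skipped
    }
}
```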
[jira] [Commented] (HBASE-15431) A bunch of methods are hot and too big to be inlined
[ https://issues.apache.org/jira/browse/HBASE-15431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891723#comment-15891723 ] Andrew Purtell commented on HBASE-15431: Inlining increases locality of reference via the instruction cache, but once the cache is exceeded there are quickly diminishing returns. Hotspot's heuristics are pretty good and I think are unlikely to be improved upon for the general case. > A bunch of methods are hot and too big to be inlined > > > Key: HBASE-15431 > URL: https://issues.apache.org/jira/browse/HBASE-15431 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl > Attachments: hotMethods.txt > > > I ran HBase with "-XX:+PrintCompilation -XX:+UnlockDiagnosticVMOptions > -XX:+PrintInlining" and then looked for "hot method too big" log lines. > I'll attach a log of those messages. > I tried to increase -XX:FreqInlineSize to 1010 to inline all these methods > (as long as they're hot), but actually didn't see any improvement. > In all cases I primed the JVM to make sure the JVM gets a chance to profile > the methods and decide whether they're hot or not. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block
[ https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891720#comment-15891720 ] ramkrishna.s.vasudevan commented on HBASE-17623: I verified the patch. Looks great. bq. * @param data encoded bytes with header Is this the encoded bytes, or the bytes with header that have yet to be encoded? Not your patch, but: {code} if (blockType == BlockType.DATA) { this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, userDataStream, baosInMemory.getBuffer(), blockType); blockType = dataBlockEncodingCtx.getBlockType(); } {code} Internally we write the int: {code} Bytes.putInt(uncompressedBytesWithHeader, HConstants.HFILEBLOCK_HEADER_SIZE + DataBlockEncoding.ID_SIZE, state.unencodedDataSizeWritten ); {code} So are we sure that baosInMemory has the space to write this int? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17704) Regions stuck in FAILED_OPEN when HDFS blocks are missing
[ https://issues.apache.org/jira/browse/HBASE-17704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891710#comment-15891710 ] Mathias Herberts commented on HBASE-17704: -- Thanks Gary for this hint; when we upgrade to 1.4.0 or 2.0.0 we'll tweak this configuration parameter. > Regions stuck in FAILED_OPEN when HDFS blocks are missing > - > > Key: HBASE-17704 > URL: https://issues.apache.org/jira/browse/HBASE-17704 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 1.1.8 >Reporter: Mathias Herberts > > We recently experienced the loss of a whole rack (6 DNs + RS) in a 120 node > cluster. This led to the regions which were present on the 6 RSs becoming > unavailable and being reassigned to live RSs. When attempting to open some of the > reassigned regions, some RSs encountered missing blocks and issued "No live > nodes contain current block Block locations", putting the regions in state > FAILED_OPEN. > Once the disappeared DNs came back online, the regions were left in > FAILED_OPEN, needing a restart of all the affected RSs to solve the problem. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17669) Implement async mergeRegion/splitRegion methods.
[ https://issues.apache.org/jira/browse/HBASE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891707#comment-15891707 ] Hadoop QA commented on HBASE-17669: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 58s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 18s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 1m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 31m 33s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 1s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 21s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 14s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 151m 52s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.client.TestAsyncAdmin | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855542/HBASE-17669.v1.patch | | JIRA Issue | HBASE-17669 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux a8088eea1fe1 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 697a55a | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/5916/artifact/patchprocess/patch-unit-hbase-server.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/5916/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results |
[jira] [Commented] (HBASE-17495) TestHRegionWithInMemoryFlush#testFlushCacheWhileScanning intermittently fails due to assertion error
[ https://issues.apache.org/jira/browse/HBASE-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891689#comment-15891689 ] ramkrishna.s.vasudevan commented on HBASE-17495: I was not able to reproduce the failure even after running the script 20 times. Also, this test is not in the flaky tests list for the trunk build, which suggests it has been consistent there. > TestHRegionWithInMemoryFlush#testFlushCacheWhileScanning intermittently fails > due to assertion error > > > Key: HBASE-17495 > URL: https://issues.apache.org/jira/browse/HBASE-17495 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Priority: Critical > Attachments: 17495-dbg.txt, > 17495-testHRegionWithInMemoryFlush-output-2.0123, > testHRegionWithInMemoryFlush-flush-output.0123, > TestHRegionWithInMemoryFlush-out.0222.tar.gz, > TestHRegionWithInMemoryFlush-out.0301, > testHRegionWithInMemoryFlush-output.0119 > > > Looping through the test (based on commit > 76dc957f64fa38ce88694054db7dbf590f368ae7), I saw the following test failure: > {code} > testFlushCacheWhileScanning(org.apache.hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush) > Time elapsed: 0.53 sec <<< FAILURE! 
> java.lang.AssertionError: toggle=false i=940 ts=1484852861597 expected:<94> > but was:<92> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:834) > at org.junit.Assert.assertEquals(Assert.java:645) > at > org.apache.hadoop.hbase.regionserver.TestHRegion.testFlushCacheWhileScanning(TestHRegion.java:3533) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > {code} > See test output for details. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17338) Treat Cell data size under global memstore heap size only when that Cell can not be copied to MSLAB
[ https://issues.apache.org/jira/browse/HBASE-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891676#comment-15891676 ] Anoop Sam John commented on HBASE-17338: bq.Dumb question. dataSize is KV infrastructure + key content + value + trailing tags and sequenceid if any? i.e. the whole KV? And CellSize is infrastructure only or rather key+infrastructure? Not sure whether used dataSize/cellSize here and there. It is data size only, in the way you said above. There is no cellSize as such; there is only cell heap size, which is the total heap size occupied by the cell impl object. bq.There is one global threshold whether data is onheap or offheap (I probably got this wrong?) Yes, there is one global threshold in both cases. Moreover, in the off-heap case there is also a global heap threshold, i.e. by default 40% of xmx, above which we will force a flush. With an off-heap memstore we have to check this extra limit as well, or else the global memstore size could become oversized, with GC impacts/OOME. bq.We could probably but the direct memory would remain allocated until we restart. Sorry for not being clear here. I don't mean turning MSLAB ON/OFF over the RS run time. We currently have a way to turn it ON at the cluster level, meaning at the RS level; all the regions in that RS will then use MSLAB. (Yes, we know some cells won't get copied to MSLAB - increment/append.) My question was whether there is a way to turn off MSLAB usage for a specific table: say MSLAB is ON in the RS, but the regions of this particular table won't use MSLAB at all. 
> Treat Cell data size under global memstore heap size only when that Cell can > not be copied to MSLAB > --- > > Key: HBASE-17338 > URL: https://issues.apache.org/jira/browse/HBASE-17338 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: Anoop Sam John > Fix For: 2.0.0 > > Attachments: HBASE-17338.patch, HBASE-17338_V2.patch, > HBASE-17338_V2.patch, HBASE-17338_V4.patch, HBASE-17338_V5.patch > > > We have only data size and heap overhead being tracked globally. Off heap > memstore works with off heap backed MSLAB pool. But a cell, when added to > memstore, not always getting copied to MSLAB. Append/Increment ops doing an > upsert, dont use MSLAB. Also based on the Cell size, we sometimes avoid > MSLAB copy. But now we track these cell data size also under the global > memstore data size which indicated off heap size in case of off heap > memstore. For global checks for flushes (against lower/upper watermark > levels), we check this size against max off heap memstore size. We do check > heap overhead against global heap memstore size (Defaults to 40% of xmx) But > for such cells the data size also should be accounted under the heap overhead. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
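The dual-threshold check discussed above (an off-heap data size limit plus a global heap limit, by default 40% of -Xmx, that should also cover cell data not copied to MSLAB) can be sketched as follows. This is an illustrative stand-in, not the actual HBase RegionServerAccounting code; the class and method names are made up for the example.

```java
// Illustrative sketch of the flush-threshold logic described above: with an
// off-heap memstore there are two global limits to enforce -- the off-heap
// data size limit, and a heap limit (e.g. 40% of the max heap) that covers
// heap overhead plus any cell data that could not be copied to MSLAB
// (increment/append upserts, oversized cells).
class GlobalMemstoreLimits {
    private final long maxOffHeapDataSize; // global off-heap memstore limit
    private final long maxHeapSize;        // e.g. 40% of -Xmx

    GlobalMemstoreLimits(long maxOffHeapDataSize, long maxHeapSize) {
        this.maxOffHeapDataSize = maxOffHeapDataSize;
        this.maxHeapSize = maxHeapSize;
    }

    // Force a flush when either the off-heap data size or the heap footprint
    // (overhead + non-MSLAB cell data) crosses its global threshold.
    boolean aboveForceFlushMark(long offHeapDataSize, long heapSize) {
        return offHeapDataSize >= maxOffHeapDataSize || heapSize >= maxHeapSize;
    }
}
```

Checking only the off-heap size would miss exactly the case the issue describes: non-MSLAB cell data accumulating on heap until GC pressure or OOME.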
[jira] [Commented] (HBASE-17716) Formalize Scan Metric names
[ https://issues.apache.org/jira/browse/HBASE-17716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891670#comment-15891670 ] stack commented on HBASE-17716: --- Hello [~karanmehta93] Can you say more about why the enum indirection route? What are we guarding against by doing this? It is hard for us to change metric naming. These are exported to be consumed by operators. We can't change the names easily without pissing off folks. Is the thought that we could have some indirection if we had enum such that phoenix could press on if we changed a metric name? Would we then have a state where phoenix might refer to a metric with one name but an operator looking at metrics exported by an hbase cluster would have a different name for a metric (if we ever changed the metric name)? Should we change all metrics to do this indirection? Thanks (Metric might be a better enum than MetricType given the metrics you list count different dimensions). > Formalize Scan Metric names > --- > > Key: HBASE-17716 > URL: https://issues.apache.org/jira/browse/HBASE-17716 > Project: HBase > Issue Type: Bug > Components: metrics >Reporter: Karan Mehta >Assignee: Karan Mehta >Priority: Minor > Attachments: HBASE-17716.patch > > > HBase provides various metrics through the API's exposed by ScanMetrics > class. > The JIRA PHOENIX-3248 requires them to be surfaced through the Phoenix > Metrics API. Currently these metrics are referred via hard-coded strings, > which are not formal and can break the Phoenix API. Hence we need to refactor > the code to assign enums for these metrics. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
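The enum indirection discussed in the comment above could look roughly like the sketch below. This is a hypothetical example with illustrative metric names, not the actual ScanMetrics strings: consumers such as Phoenix reference the enum constant, so the exported string lives in exactly one place.

```java
// Hypothetical sketch of formalized scan metric names: each metric gets a
// stable programmatic handle (the enum constant) plus the string under which
// it is exported. The names here are illustrative, not HBase's real ones.
enum ScanMetric {
    RPC_CALLS("RPC_CALLS"),
    REMOTE_RPC_CALLS("REMOTE_RPC_CALLS"),
    MILLIS_BETWEEN_NEXTS("MILLIS_BETWEEN_NEXTS");

    private final String metricName;

    ScanMetric(String metricName) {
        this.metricName = metricName;
    }

    // Downstream code (e.g. Phoenix) calls this instead of hard-coding the
    // string, so a rename only has to happen in this one enum.
    String getMetricName() {
        return metricName;
    }
}
```

Note this sketch also illustrates stack's concern: if the exported string ever changed while the enum constant kept its old name, operators reading cluster metrics and API consumers would see different names for the same metric.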
[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block
[ https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891660#comment-15891660 ] CHIA-PING TSAI commented on HBASE-17623: Address [~ram_krish]'s comment. The experiment is shown below. - G1 - 400 GB data - no split - default compaction - v2 patch ||statistics||before||before(cache-on-write)||after||after(cache-on-write)|| |elapsed(s)|12191|12126|11581|11481| |young GC count|1931|3307|1823|2845| |young total GC time(s)|209|383|245|372| |old GC count|51|712|21|657| |old total GC time(s)|52|832|33|790| |total pause time(s)|212|411|226|342| > Reuse the bytes array when building the hfile block > --- > > Key: HBASE-17623 > URL: https://issues.apache.org/jira/browse/HBASE-17623 > Project: HBase > Issue Type: Improvement >Reporter: CHIA-PING TSAI >Assignee: CHIA-PING TSAI >Priority: Minor > Fix For: 2.0.0, 1.4.0 > > Attachments: after(snappy_hfilesize=5.04GB).png, > after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, > before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, > HBASE-17623.branch-1.v1.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, > HBASE-17623.v1.patch, HBASE-17623.v2.patch, memory allocation measurement.xlsx > > > There are two improvements. > # The onDiskBlockBytesWithHeader should maintain a bytes array which can be > reused when building the hfile. > # The onDiskBlockBytesWithHeader is copied to an new bytes array only when we > need to cache the block. > # If no block need to be cached, the uncompressedBlockBytesWithHeader will > never be created. 
> {code:title=HFileBlock.java|borderStyle=solid} > private void finishBlock() throws IOException { > if (blockType == BlockType.DATA) { > this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, > userDataStream, > baosInMemory.getBuffer(), blockType); > blockType = dataBlockEncodingCtx.getBlockType(); > } > userDataStream.flush(); > // This does an array copy, so it is safe to cache this byte array when > cache-on-write. > // Header is still the empty, 'dummy' header that is yet to be filled > out. > uncompressedBlockBytesWithHeader = baosInMemory.toByteArray(); > prevOffset = prevOffsetByType[blockType.getId()]; > // We need to set state before we can package the block up for > cache-on-write. In a way, the > // block is ready, but not yet encoded or compressed. > state = State.BLOCK_READY; > if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) > { > onDiskBlockBytesWithHeader = dataBlockEncodingCtx. > compressAndEncrypt(uncompressedBlockBytesWithHeader); > } else { > onDiskBlockBytesWithHeader = defaultBlockEncodingCtx. > compressAndEncrypt(uncompressedBlockBytesWithHeader); > } > // Calculate how many bytes we need for checksum on the tail of the > block. > int numBytes = (int) ChecksumUtil.numBytes( > onDiskBlockBytesWithHeader.length, > fileContext.getBytesPerChecksum()); > // Put the header for the on disk bytes; header currently is > unfilled-out > putHeader(onDiskBlockBytesWithHeader, 0, > onDiskBlockBytesWithHeader.length + numBytes, > uncompressedBlockBytesWithHeader.length, > onDiskBlockBytesWithHeader.length); > // Set the header for the uncompressed bytes (for cache-on-write) -- > IFF different from > // onDiskBlockBytesWithHeader array. 
> if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) { > putHeader(uncompressedBlockBytesWithHeader, 0, > onDiskBlockBytesWithHeader.length + numBytes, > uncompressedBlockBytesWithHeader.length, > onDiskBlockBytesWithHeader.length); > } > if (onDiskChecksum.length != numBytes) { > onDiskChecksum = new byte[numBytes]; > } > ChecksumUtil.generateChecksums( > onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length, > onDiskChecksum, 0, fileContext.getChecksumType(), > fileContext.getBytesPerChecksum()); > }{code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
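The reuse idea in the issue description can be sketched independently of HFileBlock. This is a minimal illustration, not the patch itself: it assumes a growable stream whose backing array survives reset(), with copying deferred to the cache-on-write path.

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

// Minimal sketch of the buffer-reuse idea from the description (not the real
// HFileBlock code): keep one growable buffer alive across blocks, reset it at
// the start of each block, and copy its contents only when cache-on-write
// actually needs a private array.
class ReusableBlockBuffer {
    // ByteArrayOutputStream.reset() keeps the backing array, so steady-state
    // block building allocates nothing once the buffer has grown large enough.
    private final ByteArrayOutputStream baosInMemory =
        new ByteArrayOutputStream(64 * 1024);

    void startNewBlock() {
        baosInMemory.reset();
    }

    void write(byte[] data) {
        baosInMemory.write(data, 0, data.length);
    }

    int size() {
        return baosInMemory.size();
    }

    // toByteArray() copies -- call it only on the cache-on-write path, so that
    // when no block is cached, no extra byte array is ever created.
    byte[] copyForCacheOnWrite() {
        return baosInMemory.toByteArray();
    }
}
```

This mirrors the two improvements listed above: the buffer is reused across blocks, and the copy happens only when a block must be cached.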
[jira] [Updated] (HBASE-17495) TestHRegionWithInMemoryFlush#testFlushCacheWhileScanning intermittently fails due to assertion error
[ https://issues.apache.org/jira/browse/HBASE-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-17495: --- Priority: Critical (was: Major) > TestHRegionWithInMemoryFlush#testFlushCacheWhileScanning intermittently fails > due to assertion error > > > Key: HBASE-17495 > URL: https://issues.apache.org/jira/browse/HBASE-17495 > Project: HBase > Issue Type: Test >Reporter: Ted Yu >Priority: Critical > Attachments: 17495-dbg.txt, > 17495-testHRegionWithInMemoryFlush-output-2.0123, > testHRegionWithInMemoryFlush-flush-output.0123, > TestHRegionWithInMemoryFlush-out.0222.tar.gz, > TestHRegionWithInMemoryFlush-out.0301, > testHRegionWithInMemoryFlush-output.0119 > > > Looping through the test (based on commit > 76dc957f64fa38ce88694054db7dbf590f368ae7), I saw the following test failure: > {code} > testFlushCacheWhileScanning(org.apache.hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush) > Time elapsed: 0.53 sec <<< FAILURE! > java.lang.AssertionError: toggle=false i=940 ts=1484852861597 expected:<94> > but was:<92> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:834) > at org.junit.Assert.assertEquals(Assert.java:645) > at > org.apache.hadoop.hbase.regionserver.TestHRegion.testFlushCacheWhileScanning(TestHRegion.java:3533) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > {code} > See test output for details. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17495) TestHRegionWithInMemoryFlush#testFlushCacheWhileScanning intermittently fails due to assertion error
[ https://issues.apache.org/jira/browse/HBASE-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891647#comment-15891647 ] ramkrishna.s.vasudevan commented on HBASE-17495: I looked into the output. The above failure could be caused by the other test, testWritesWhileGetting. So overall, TestHRegionWithInMemoryFlush has some issues, and which failure appears depends on which test fails. > TestHRegionWithInMemoryFlush#testFlushCacheWhileScanning intermittently fails > due to assertion error > > > Key: HBASE-17495 > URL: https://issues.apache.org/jira/browse/HBASE-17495 > Project: HBase > Issue Type: Test >Reporter: Ted Yu > Attachments: 17495-dbg.txt, > 17495-testHRegionWithInMemoryFlush-output-2.0123, > testHRegionWithInMemoryFlush-flush-output.0123, > TestHRegionWithInMemoryFlush-out.0222.tar.gz, > TestHRegionWithInMemoryFlush-out.0301, > testHRegionWithInMemoryFlush-output.0119 > > > Looping through the test (based on commit > 76dc957f64fa38ce88694054db7dbf590f368ae7), I saw the following test failure: > {code} > testFlushCacheWhileScanning(org.apache.hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush) > Time elapsed: 0.53 sec <<< FAILURE! 
> java.lang.AssertionError: toggle=false i=940 ts=1484852861597 expected:<94> > but was:<92> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:834) > at org.junit.Assert.assertEquals(Assert.java:645) > at > org.apache.hadoop.hbase.regionserver.TestHRegion.testFlushCacheWhileScanning(TestHRegion.java:3533) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > {code} > See test output for details. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17710) HBase in standalone mode creates directories with 777 permission
[ https://issues.apache.org/jira/browse/HBASE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891622#comment-15891622 ] Hadoop QA commented on HBASE-17710: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 4s {color} | {color:blue} The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for instructions. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 38s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s {color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s {color} | {color:green} branch-1 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 49s {color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s {color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s {color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 14m 27s {color} | {color:green} The patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s {color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 7s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 113m 11s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.regionserver.TestHRegion | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:e01ee2f | | JIRA Patch URL |
[jira] [Updated] (HBASE-17669) Implement async mergeRegion/splitRegion methods.
[ https://issues.apache.org/jira/browse/HBASE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-17669: - Attachment: HBASE-17669.v1.patch > Implement async mergeRegion/splitRegion methods. > > > Key: HBASE-17669 > URL: https://issues.apache.org/jira/browse/HBASE-17669 > Project: HBase > Issue Type: Sub-task > Components: Admin, asyncclient, Client >Affects Versions: 2.0.0 >Reporter: Zheng Hu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-17669.v1.patch > > > RT -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17669) Implement async mergeRegion/splitRegion methods.
[ https://issues.apache.org/jira/browse/HBASE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-17669: - Attachment: (was: HBASE-17669.v1.patch) > Implement async mergeRegion/splitRegion methods. > > > Key: HBASE-17669 > URL: https://issues.apache.org/jira/browse/HBASE-17669 > Project: HBase > Issue Type: Sub-task > Components: Admin, asyncclient, Client >Affects Versions: 2.0.0 >Reporter: Zheng Hu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-17669.v1.patch > > > RT -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17600) Implement get/create/modify/delete/list namespace admin operations
[ https://issues.apache.org/jira/browse/HBASE-17600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891582#comment-15891582 ] Hadoop QA commented on HBASE-17600: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 44s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 53s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 1m 6s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 31m 17s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 105m 8s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 157m 51s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hbase.regionserver.compactions.TestFIFOCompactionPolicy | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855503/HBASE-17600.master.003.patch | | JIRA Issue | HBASE-17600 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux fff8212c475f 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 613bcb3 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/5912/artifact/patchprocess/patch-unit-hbase-server.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/5912/artifact/patchprocess/patch-unit-hbase-server.txt | |
[jira] [Updated] (HBASE-17646) Implement Async getRegion method
[ https://issues.apache.org/jira/browse/HBASE-17646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-17646: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Pushed to master. Thanks [~openinx] for the contribution. > Implement Async getRegion method > > > Key: HBASE-17646 > URL: https://issues.apache.org/jira/browse/HBASE-17646 > Project: HBase > Issue Type: Sub-task > Components: asyncclient >Affects Versions: 2.0.0 >Reporter: Zheng Hu >Assignee: Zheng Hu > Labels: asynchronous > Fix For: 2.0.0 > > Attachments: HBASE-17646.v1.patch, HBASE-17646.v2.patch, > HBASE-17646.v3.patch, HBASE-17646.v4.patch > > > There are some async admin APIs which depend on the async getRegion method. > Such as: > 1. closeRegion. > 2. flushRegion. > 3. compactRegion. > 4. mergeRegion. > 5. splitRegion. > and so on. > So, implement the async getRegion method first. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17669) Implement async mergeRegion/splitRegion methods.
[ https://issues.apache.org/jira/browse/HBASE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891573#comment-15891573 ] Zheng Hu commented on HBASE-17669: -- The patch depends on the getRegion() method from HBASE-17646. I will put it up on the review board after HBASE-17646 is committed to the hbase git repo. > Implement async mergeRegion/splitRegion methods. > > > Key: HBASE-17669 > URL: https://issues.apache.org/jira/browse/HBASE-17669 > Project: HBase > Issue Type: Sub-task > Components: Admin, asyncclient, Client >Affects Versions: 2.0.0 >Reporter: Zheng Hu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-17669.v1.patch > > > RT -- This message was sent by Atlassian JIRA (v6.3.15#6346)
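The dependency described in the comment above -- an async admin operation chained on an async getRegion -- can be sketched with CompletableFuture. All names and signatures below are illustrative stand-ins, not the real AsyncHBaseAdmin API.

```java
import java.util.concurrent.CompletableFuture;

// Hedged sketch of the structure described above: splitRegion first resolves
// the region asynchronously (the getRegion step from HBASE-17646) and only
// then issues the actual operation, by chaining on the returned future.
// Method names and types here are made up for illustration.
class AsyncAdminSketch {
    // Stands in for the async meta-table lookup that resolves a region.
    CompletableFuture<String> getRegion(String regionName) {
        return CompletableFuture.completedFuture("regionInfo:" + regionName);
    }

    // The split depends on getRegion: the "RPC" runs only after the region
    // has been resolved, without blocking the calling thread.
    CompletableFuture<String> splitRegion(String regionName) {
        return getRegion(regionName)
            .thenApply(regionInfo -> "split issued for " + regionInfo);
    }
}
```

The same chaining shape would cover the other operations listed on HBASE-17646 (closeRegion, flushRegion, compactRegion, mergeRegion), which is why getRegion had to land first.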
[jira] [Commented] (HBASE-17669) Implement async mergeRegion/splitRegion methods.
[ https://issues.apache.org/jira/browse/HBASE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891571#comment-15891571 ] Hadoop QA commented on HBASE-17669: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s {color} | {color:red} HBASE-17669 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855538/HBASE-17669.v1.patch | | JIRA Issue | HBASE-17669 | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/5915/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Implement async mergeRegion/splitRegion methods. > > > Key: HBASE-17669 > URL: https://issues.apache.org/jira/browse/HBASE-17669 > Project: HBase > Issue Type: Sub-task > Components: Admin, asyncclient, Client >Affects Versions: 2.0.0 >Reporter: Zheng Hu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-17669.v1.patch > > > RT -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17669) Implement async mergeRegion/splitRegion methods.
[ https://issues.apache.org/jira/browse/HBASE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-17669: - Status: Patch Available (was: Open) > Implement async mergeRegion/splitRegion methods. > > > Key: HBASE-17669 > URL: https://issues.apache.org/jira/browse/HBASE-17669 > Project: HBase > Issue Type: Sub-task > Components: Admin, asyncclient, Client >Affects Versions: 2.0.0 >Reporter: Zheng Hu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-17669.v1.patch > > > RT -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17669) Implement async mergeRegion/splitRegion methods.
[ https://issues.apache.org/jira/browse/HBASE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-17669: - Attachment: HBASE-17669.v1.patch > Implement async mergeRegion/splitRegion methods. > > > Key: HBASE-17669 > URL: https://issues.apache.org/jira/browse/HBASE-17669 > Project: HBase > Issue Type: Sub-task > Components: Admin, asyncclient, Client >Affects Versions: 2.0.0 >Reporter: Zheng Hu >Assignee: Zheng Hu > Fix For: 2.0.0 > > Attachments: HBASE-17669.v1.patch > > > RT -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17716) Formalize Scan Metric names
[ https://issues.apache.org/jira/browse/HBASE-17716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891565#comment-15891565 ] Hadoop QA commented on HBASE-17716: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 6s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 31s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | 
{color:green} mvninstall {color} | {color:green} 1m 0s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 7s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 55s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s {color} | {color:red} hbase-client generated 1 new + 13 unchanged - 0 fixed = 14 total (was 13) {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 2s {color} | {color:red} hbase-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 109m 17s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s {color} | {color:red} The patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 154m 31s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.TestInterfaceAudienceAnnotations | | | hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush | | | hadoop.hbase.mapreduce.TestTableMapReduce | | | hadoop.hbase.TestServerSideScanMetricsFromClientSide | | Timed out junit tests | org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 | | | org.apache.hadoop.hbase.master.TestSplitLogManager | | | org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer | | | org.apache.hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush | | | org.apache.hadoop.hbase.master.TestRestartCluster | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855501/HBASE-17716.patch | | JIRA Issue | HBASE-17716 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 09c77bdcc01d 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016
[jira] [Commented] (HBASE-17655) Removing MemStoreScanner and SnapshotScanner
[ https://issues.apache.org/jira/browse/HBASE-17655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891561#comment-15891561 ] ramkrishna.s.vasudevan commented on HBASE-17655: The comment from Anoop is fine to go, and there is one comment in RB from me. If those are addressed/fixed then we can go for commit. Thanks. I can take up the commit to master branch once the above are resolved. > Removing MemStoreScanner and SnapshotScanner > > > Key: HBASE-17655 > URL: https://issues.apache.org/jira/browse/HBASE-17655 > Project: HBase > Issue Type: Improvement > Components: Scanners >Affects Versions: 2.0.0 >Reporter: Eshcar Hillel >Assignee: Eshcar Hillel > Attachments: HBASE-17655-V01.patch, HBASE-17655-V02.patch, > HBASE-17655-V03.patch, HBASE-17655-V04.patch, HBASE-17655-V05.patch, > HBASE-17655-V05.patch, HBASE-17655-V06.patch > > > With CompactingMemstore becoming the new default, a store comprises multiple > memory segments and not just 1-2. MemStoreScanner encapsulates the scanning > of segments in the memory part of the store. SnapshotScanner is used to scan > the snapshot segment upon flush to disk. > Having the logic of scanners scattered in multiple classes (StoreScanner, > SegmentScanner, MemStoreScanner, SnapshotScanner) makes maintenance and > debugging challenging, not always for a good reason. > For example, MemStoreScanner has a KeyValueHeap (KVH). When creating the > store scanner, which also has a KVH, this makes a KVH inside a KVH. Reasoning > about the correctness of the methods supported by the scanner (seek, next, > hasNext, peek, etc.) is hard and debugging them is cumbersome. > In addition, by removing the MemStoreScanner layer we allow the store scanner to > filter out each one of the memory scanners instead of either taking them all > (in most cases) or discarding them all (rarely). > SnapshotScanner is a simplified version of SegmentScanner as it is used only > in a specific context. 
However, it is an additional implementation of the same > logic with no real performance advantage. > Therefore, I suggest removing both MemStoreScanner and SnapshotScanner. The > code is adjusted to handle the list of segment scanners they encapsulate. > This fits well with the current code since in most cases at some point a list > of scanners is expected, so passing the actual list of segment scanners is > more natural than wrapping a single (high-level) scanner with > Collections.singletonList(...). -- This message was sent by Atlassian JIRA (v6.3.15#6346)
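The point about preferring a flat list of segment scanners over a wrapped singleton can be sketched as follows. The class and method names here are illustrative stand-ins, not the actual HBase scanner API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-ins for HBase's scanner types -- NOT the real API.
interface KeyValueScanner {
    long maxSequenceId();
}

class SegmentScanner implements KeyValueScanner {
    private final long seqId;
    SegmentScanner(long seqId) { this.seqId = seqId; }
    @Override
    public long maxSequenceId() { return seqId; }
}

public class ScannerListSketch {
    // Before: a single wrapper scanner forced an all-or-nothing choice,
    // e.g. storeScanner.init(Collections.singletonList(memStoreScanner)).
    // After: each memory segment contributes its own scanner, so the store
    // scanner can keep or drop each one individually.
    static List<KeyValueScanner> selectScanners(List<SegmentScanner> segments,
                                                long minSeqId) {
        List<KeyValueScanner> selected = new ArrayList<>();
        for (SegmentScanner s : segments) {
            if (s.maxSequenceId() >= minSeqId) { // per-segment filtering
                selected.add(s);
            }
        }
        return selected;
    }
}
```

With per-segment filtering, the store scanner can discard only the segments a scan cannot see instead of taking or dropping the whole memory side at once.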
[jira] [Commented] (HBASE-14375) define public API for spark integration module
[ https://issues.apache.org/jira/browse/HBASE-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891531#comment-15891531 ] Jerry He commented on HBASE-14375: -- Don't you think we can make the DefaultSource private as it is? > define public API for spark integration module > -- > > Key: HBASE-14375 > URL: https://issues.apache.org/jira/browse/HBASE-14375 > Project: HBase > Issue Type: Task > Components: spark >Reporter: Sean Busbey >Assignee: Jerry He >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-14375-v1.patch > > > Before we can put the spark integration module into a release, we need to > annotate its public API surface. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17646) Implement Async getRegion method
[ https://issues.apache.org/jira/browse/HBASE-17646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891504#comment-15891504 ] Duo Zhang commented on HBASE-17646: --- The failed UT is not related. Will commit shortly. > Implement Async getRegion method > > > Key: HBASE-17646 > URL: https://issues.apache.org/jira/browse/HBASE-17646 > Project: HBase > Issue Type: Sub-task > Components: asyncclient >Affects Versions: 2.0.0 >Reporter: Zheng Hu >Assignee: Zheng Hu > Labels: asynchronous > Fix For: 2.0.0 > > Attachments: HBASE-17646.v1.patch, HBASE-17646.v2.patch, > HBASE-17646.v3.patch, HBASE-17646.v4.patch > > > There are some async admin APIs that depend on an async getRegion method, > such as: > 1. closeRegion. > 2. flushRegion. > 3. compactRegion. > 4. mergeRegion. > 5. splitRegion. > and so on. > So, implement the async getRegion method first. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
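The dependency described above — admin operations first resolving a region asynchronously, then acting on it — can be sketched with CompletableFuture. The class names and the stubbed lookup below are hypothetical, not the committed HBASE-17646 code:

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CompletableFuture;

public class AsyncGetRegionSketch {
    // Hypothetical stand-in for a resolved region location -- not HBase's class.
    static class RegionLocation {
        final String regionName;
        RegionLocation(String regionName) { this.regionName = regionName; }
    }

    // The async getRegion lookup that the admin operations depend on.
    static CompletableFuture<RegionLocation> getRegion(byte[] regionNameOrRow) {
        // A real client would consult hbase:meta here; this stub resolves immediately.
        return CompletableFuture.completedFuture(
            new RegionLocation(new String(regionNameOrRow, StandardCharsets.UTF_8)));
    }

    // An admin call such as flushRegion composes on getRegion instead of
    // blocking on a synchronous lookup.
    static CompletableFuture<String> flushRegion(byte[] regionName) {
        return getRegion(regionName).thenApply(loc -> "flushed:" + loc.regionName);
    }
}
```

Because closeRegion, flushRegion, compactRegion, mergeRegion, and splitRegion all compose on the lookup the same way, implementing the async getRegion step first unblocks the rest.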
[jira] [Updated] (HBASE-17710) HBase in standalone mode creates directories with 777 permission
[ https://issues.apache.org/jira/browse/HBASE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-17710: --- Attachment: 17710.branch-1.v2.txt > HBase in standalone mode creates directories with 777 permission > > > Key: HBASE-17710 > URL: https://issues.apache.org/jira/browse/HBASE-17710 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 1.1.2 > Environment: HDP-2.5.3 >Reporter: Toshihiro Suzuki >Assignee: Ted Yu > Fix For: 2.0.0, 1.4.0 > > Attachments: 17710.branch-1.v1.txt, 17710.branch-1.v2.txt, > 17710.branch-1.v2.txt, 17710.v1.txt, 17710.v2.txt, 17710.v3.txt, 17710.v4.txt > > > HBase in standalone mode creates directories with 777 permission in > hbase.rootdir. > Ambari metrics collector defaults to standalone mode. > {code} > # find /var/lib/ambari-metrics-collector/hbase -perm 777 -type d -exec ls -ld > {} \; > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/namespace/d0cca53847904f4b4add1caa0ce3a9af/info > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/meta > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/session > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.CATALOG/2f4ce2294cd21cecb58fd1aca5646144/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/0eb67274ece8a4a26cfeeef2c6d4cd37/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/aef86710a4005f98e2dc90675f2eb325/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.STATS/5b1d955e255e55979621214a7e4083b8/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.FUNCTION/32c033735cf144bac5637de23f7f7dd0/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRICS_METADATA/e420dfa799742fe4516ad1e4deefb793/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/HOSTED_APPS_METADATA/110be63e2a9994121fc5b48d663daf2c/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/CONTAINER_METRICS/a103719f87e8430635abf51a7fe98637/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/cdb1d032beb90e350ce309e5d383c78e/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/294deab47187494e845a5199702b4d04/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/1a263b4fe068ef2db5ba1c3e45553354/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/48f94dfb0161d8a28f645d2e1a473235/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_HOURLY/6d096ac3e70e54dd4a8612e17cfc4b11/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_DAILY/e81850d62da64c8d1c67be309f136e23/0 > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/.tmp > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/.tmp > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_MINUTE/ca9f9754ae9ae4cdc3e1b0523eecc390/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_HOURLY/8412e8a8aec5d6307943fac78ce14c7a/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_DAILY/7c3358aba91ea0d76ddd8bc3ceb2d578/0 > {code} > My analysis is as follows: > FileSystem.mkdirs(Path f) method creates a directory with permission 777. > Because HFileSystem which inherits FileSystem doesn't override the method, > when we call HFileSystem.mkdirs(Path f), it tries to create a
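The direction of the fix — creating directories with an explicit permission instead of inheriting a broad default — can be illustrated with plain java.nio on a POSIX filesystem. The actual patch targets Hadoop's FileSystem/HFileSystem mkdirs, whose API differs; this is only an analogy:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileAttribute;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class MkdirsWithPermission {
    // Create a directory with an explicit 700 mode instead of the permissive
    // default -- the same principle as passing an explicit FsPermission to
    // mkdirs in Hadoop's FileSystem API rather than relying on mkdirs(Path f).
    static Path mkdirPrivate(Path dir) throws IOException {
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rwx------");
        FileAttribute<Set<PosixFilePermission>> attr =
            PosixFilePermissions.asFileAttribute(perms);
        return Files.createDirectories(dir, attr);
    }
}
```

The process umask may further restrict the requested mode, but it can never widen it, so the 777 directories seen above cannot appear this way.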
[jira] [Commented] (HBASE-17465) [C++] implement request retry mechanism over RPC
[ https://issues.apache.org/jira/browse/HBASE-17465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891461#comment-15891461 ] Xiaobing Zhou commented on HBASE-17465: --- posted v9 #fixed broken promise #removed AsyncConnection #rename ElapsedMs() to ElapsedMillis() #changed ./hbase-protocol/src/main/protobuf/Client.proto #removed client-test.h > [C++] implement request retry mechanism over RPC > > > Key: HBASE-17465 > URL: https://issues.apache.org/jira/browse/HBASE-17465 > Project: HBase > Issue Type: Sub-task >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HBASE-17465-HBASE-14850.000.patch, > HBASE-17465-HBASE-14850.001.patch, HBASE-17465-HBASE-14850.002.patch, > HBASE-17465-HBASE-14850.003.patch, HBASE-17465-HBASE-14850.004.patch, > HBASE-17465-HBASE-14850.005.patch, HBASE-17465-HBASE-14850.006.patch, > HBASE-17465-HBASE-14850.007.patch, HBASE-17465-HBASE-14850.008.patch, > HBASE-17465-HBASE-14850.009.patch > > > HBASE-17051 implemented the RPC layer. Request retries will make the system > more reliable. This JIRA proposes adding them, which corresponds to the similar > implementation in SingleRequestCallerBuilder (or BatchCallerBuilder, > ScanSingleRegionCallerBuilder, SmallScanCallerBuilder, etc.) and > AsyncSingleRequestRpcRetryingCaller. As a bonus, retry should be more > generic, decoupled from HRegionLocation. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17465) [C++] implement request retry mechanism over RPC
[ https://issues.apache.org/jira/browse/HBASE-17465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HBASE-17465: -- Attachment: HBASE-17465-HBASE-14850.009.patch > [C++] implement request retry mechanism over RPC > > > Key: HBASE-17465 > URL: https://issues.apache.org/jira/browse/HBASE-17465 > Project: HBase > Issue Type: Sub-task >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HBASE-17465-HBASE-14850.000.patch, > HBASE-17465-HBASE-14850.001.patch, HBASE-17465-HBASE-14850.002.patch, > HBASE-17465-HBASE-14850.003.patch, HBASE-17465-HBASE-14850.004.patch, > HBASE-17465-HBASE-14850.005.patch, HBASE-17465-HBASE-14850.006.patch, > HBASE-17465-HBASE-14850.007.patch, HBASE-17465-HBASE-14850.008.patch, > HBASE-17465-HBASE-14850.009.patch > > > HBASE-17051 implemented the RPC layer. Request retries will make the system > more reliable. This JIRA proposes adding them, which corresponds to the similar > implementation in SingleRequestCallerBuilder (or BatchCallerBuilder, > ScanSingleRegionCallerBuilder, SmallScanCallerBuilder, etc.) and > AsyncSingleRequestRpcRetryingCaller. As a bonus, retry should be more > generic, decoupled from HRegionLocation. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (HBASE-17465) [C++] implement request retry mechanism over RPC
[ https://issues.apache.org/jira/browse/HBASE-17465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891461#comment-15891461 ] Xiaobing Zhou edited comment on HBASE-17465 at 3/2/17 1:56 AM: --- posted v9 # fixed broken promise # removed AsyncConnection # rename ElapsedMs() to ElapsedMillis() # changed ./hbase-protocol/src/main/protobuf/Client.proto # removed client-test.h was (Author: xiaobingo): posted v9 #fixed broken promise #removed AsyncConnection #rename ElapsedMs() to ElapsedMillis() #changed ./hbase-protocol/src/main/protobuf/Client.proto #removed client-test.h > [C++] implement request retry mechanism over RPC > > > Key: HBASE-17465 > URL: https://issues.apache.org/jira/browse/HBASE-17465 > Project: HBase > Issue Type: Sub-task >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HBASE-17465-HBASE-14850.000.patch, > HBASE-17465-HBASE-14850.001.patch, HBASE-17465-HBASE-14850.002.patch, > HBASE-17465-HBASE-14850.003.patch, HBASE-17465-HBASE-14850.004.patch, > HBASE-17465-HBASE-14850.005.patch, HBASE-17465-HBASE-14850.006.patch, > HBASE-17465-HBASE-14850.007.patch, HBASE-17465-HBASE-14850.008.patch, > HBASE-17465-HBASE-14850.009.patch > > > HBASE-17051 implemented the RPC layer. Request retries will make the system > more reliable. This JIRA proposes adding them, which corresponds to the similar > implementation in SingleRequestCallerBuilder (or BatchCallerBuilder, > ScanSingleRegionCallerBuilder, SmallScanCallerBuilder, etc.) and > AsyncSingleRequestRpcRetryingCaller. As a bonus, retry should be more > generic, decoupled from HRegionLocation. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17579) Backport HBASE-16302 to 1.3.1
[ https://issues.apache.org/jira/browse/HBASE-17579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891451#comment-15891451 ] Hadoop QA commented on HBASE-17579: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 14s {color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} branch-1.3 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s {color} | {color:green} branch-1.3 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s {color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s {color} | {color:green} branch-1.3 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s {color} | {color:green} branch-1.3 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s {color} | {color:green} branch-1.3 passed with JDK 
v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 10s {color} | {color:red} hbase-hadoop2-compat in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s {color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 11s {color} | {color:red} hbase-hadoop2-compat in the patch failed with JDK v1.7.0_80. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 11s {color} | {color:red} hbase-hadoop2-compat in the patch failed with JDK v1.7.0_80. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 40s {color} | {color:red} The patch causes 14 errors with Hadoop v2.4.0. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 20s {color} | {color:red} The patch causes 14 errors with Hadoop v2.4.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 59s {color} | {color:red} The patch causes 14 errors with Hadoop v2.5.0. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 40s {color} | {color:red} The patch causes 14 errors with Hadoop v2.5.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 20s {color} | {color:red} The patch causes 14 errors with Hadoop v2.5.2. 
{color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 59s {color} | {color:red} The patch causes 14 errors with Hadoop v2.6.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 37s {color} | {color:red} The patch causes 14 errors with Hadoop v2.6.2. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 15s {color} | {color:red} The patch causes 14 errors with Hadoop v2.6.3. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 54s {color} | {color:red} The patch causes 14 errors with Hadoop v2.7.1. {color} | | {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 0m 10s {color} | {color:red} hbase-hadoop2-compat in the patch failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 11s {color} | {color:red} hbase-hadoop2-compat in the patch failed. {color} | |
[jira] [Commented] (HBASE-17716) Formalize Scan Metric names
[ https://issues.apache.org/jira/browse/HBASE-17716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891452#comment-15891452 ] Ted Yu commented on HBASE-17716: Which release is this targeting? For 1.x, can the method(s) taking String counterName be kept in ServerSideScanMetrics (the String can be obtained from the enum)? > Formalize Scan Metric names > --- > > Key: HBASE-17716 > URL: https://issues.apache.org/jira/browse/HBASE-17716 > Project: HBase > Issue Type: Bug > Components: metrics >Reporter: Karan Mehta >Assignee: Karan Mehta >Priority: Minor > Attachments: HBASE-17716.patch > > > HBase provides various metrics through the APIs exposed by the ScanMetrics > class. > The JIRA PHOENIX-3248 requires them to be surfaced through the Phoenix > Metrics API. Currently these metrics are referred to via hard-coded strings, > which are not formal and can break the Phoenix API. Hence we need to refactor > the code to assign enums for these metrics. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17710) HBase in standalone mode creates directories with 777 permission
[ https://issues.apache.org/jira/browse/HBASE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891443#comment-15891443 ] Ted Yu commented on HBASE-17710: {code} Tests in error: TestAcidGuarantees.testMobGetAtomicity:435->runTestAtomicity:388 ? Runtime Def... {code} Not related to patch. > HBase in standalone mode creates directories with 777 permission > > > Key: HBASE-17710 > URL: https://issues.apache.org/jira/browse/HBASE-17710 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 1.1.2 > Environment: HDP-2.5.3 >Reporter: Toshihiro Suzuki >Assignee: Ted Yu > Fix For: 2.0.0, 1.4.0 > > Attachments: 17710.branch-1.v1.txt, 17710.branch-1.v2.txt, > 17710.v1.txt, 17710.v2.txt, 17710.v3.txt, 17710.v4.txt > > > HBase in standalone mode creates directories with 777 permission in > hbase.rootdir. > Ambari metrics collector defaults to standalone mode. > {code} > # find /var/lib/ambari-metrics-collector/hbase -perm 777 -type d -exec ls -ld > {} \; > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/namespace/d0cca53847904f4b4add1caa0ce3a9af/info > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/meta > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/session > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.CATALOG/2f4ce2294cd21cecb58fd1aca5646144/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/0eb67274ece8a4a26cfeeef2c6d4cd37/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/aef86710a4005f98e2dc90675f2eb325/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.STATS/5b1d955e255e55979621214a7e4083b8/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.FUNCTION/32c033735cf144bac5637de23f7f7dd0/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRICS_METADATA/e420dfa799742fe4516ad1e4deefb793/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/HOSTED_APPS_METADATA/110be63e2a9994121fc5b48d663daf2c/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/CONTAINER_METRICS/a103719f87e8430635abf51a7fe98637/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/cdb1d032beb90e350ce309e5d383c78e/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/294deab47187494e845a5199702b4d04/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/1a263b4fe068ef2db5ba1c3e45553354/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/48f94dfb0161d8a28f645d2e1a473235/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_HOURLY/6d096ac3e70e54dd4a8612e17cfc4b11/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_DAILY/e81850d62da64c8d1c67be309f136e23/0 > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/.tmp > drwxrwxrwx. 
2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/.tmp > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_MINUTE/ca9f9754ae9ae4cdc3e1b0523eecc390/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_HOURLY/8412e8a8aec5d6307943fac78ce14c7a/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_DAILY/7c3358aba91ea0d76ddd8bc3ceb2d578/0 > {code} > My analysis is as follows: > FileSystem.mkdirs(Path f) method creates a directory with permission 777. > Because HFileSystem which
[jira] [Commented] (HBASE-15431) A bunch of methods are hot and too big to be inlined
[ https://issues.apache.org/jira/browse/HBASE-15431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891431#comment-15891431 ] Vincent Poon commented on HBASE-15431: -- For anyone still following this, you can actually force inlining with -XX:CompileCommandFile=/home/hbase/hotspot_compiler, and then put lines like the following in the file: inline org.apache.hadoop.hbase.io.hfile.HFileReaderV3$ScannerV3::blockSeek inline org.apache.hadoop.hbase.regionserver.RSRpcServices::scan Unfortunately, after forcing inlining of all the "hot method too big" methods, I didn't notice any appreciable performance improvement in benchmarks using PerformanceEvaluation scan/randomRead. Perhaps targeting only a few specific methods might be better, but as Andrew pointed out, it's trial and error. > A bunch of methods are hot and too big to be inlined > > > Key: HBASE-15431 > URL: https://issues.apache.org/jira/browse/HBASE-15431 > Project: HBase > Issue Type: Bug >Reporter: Lars Hofhansl > Attachments: hotMethods.txt > > > I ran HBase with "-XX:+PrintCompilation -XX:+UnlockDiagnosticVMOptions > -XX:+PrintInlining" and then looked for "hot method too big" log lines. > I'll attach a log of those messages. > I tried to increase -XX:FreqInlineSize to 1010 to inline all these methods > (as long as they're hot), but actually didn't see any improvement. > In all cases I primed the JVM to make sure the JVM gets a chance to profile > the methods and decide whether they're hot or not. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-16755) Honor flush policy under global memstore pressure
[ https://issues.apache.org/jira/browse/HBASE-16755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891429#comment-15891429 ] Duo Zhang commented on HBASE-16755: --- And another minor reason why we set force to true is that we select regions based on their total memstore size. If we do not flush all the contents in memstore, then the best candidate may not be the one which has the maximum memstore size. Maybe we could introduce new methods in FlushPolicy to find out the flushable size and select regions based on that value. Of course, this is only a nice-to-have; we can do it later if anyone has interest. On the patch, as we now rely on the FlushPolicy to always return something to flush, maybe we should add some checks in the code? We may introduce new flush policies in the future, and we need to make sure that they also follow the rule. Thanks. > Honor flush policy under global memstore pressure > - > > Key: HBASE-16755 > URL: https://issues.apache.org/jira/browse/HBASE-16755 > Project: HBase > Issue Type: Improvement > Components: regionserver >Reporter: Ashu Pachauri >Assignee: Ashu Pachauri > Fix For: 1.3.1 > > Attachments: HBASE-16755.v0.patch > > > When the global memstore reaches the low water mark, we pick the best flushable > region and flush all column families for it. This is a suboptimal approach in > the sense that it leads to an unnecessarily high file creation rate and IO > amplification due to compactions. We should still try to honor the underlying > FlushPolicy. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
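The selection problem described in the comment — picking the best flush candidate by the size the policy would actually flush rather than by total memstore size — can be sketched like this. The accounting class is a hypothetical stand-in, not the HBase API:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class FlushCandidateSketch {
    // Hypothetical per-region memstore accounting -- not the HBase API.
    static class Candidate {
        final String region;
        final long totalMemstoreSize; // what region selection currently sorts by
        final long flushableSize;     // what the FlushPolicy would actually flush
        Candidate(String region, long total, long flushable) {
            this.region = region;
            this.totalMemstoreSize = total;
            this.flushableSize = flushable;
        }
    }

    // Picking by flushable size frees the most memory per flush; picking by
    // total size can choose a region whose policy flushes only a sliver of it.
    static Optional<Candidate> bestByFlushableSize(List<Candidate> candidates) {
        return candidates.stream()
            .max(Comparator.comparingLong(c -> c.flushableSize));
    }
}
```

Here a region with a large total but small flushable size loses to one whose policy would actually release more memory, which is the behavior the proposed FlushPolicy method would enable.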
[jira] [Comment Edited] (HBASE-17716) Formalize Scan Metric names
[ https://issues.apache.org/jira/browse/HBASE-17716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891425#comment-15891425 ] Samarth Jain edited comment on HBASE-17716 at 3/2/17 1:35 AM: -- [~tedyu] - I see that the ServerSideScanMetrics class is marked as @InterfaceStability.Evolving. Would it be a bad thing if we break compatibility for the next minor release? Today, the way we have exposed the setCounter() API, we are letting users supply random counter names. Such random metrics wouldn't really be of use since the code would never update them. So IMHO it is better to enforce metric types via an enum. If we want it for older versions of HBase, I guess we can just have constant strings defined in the ScanMetrics or ServerSideScanMetrics classes. was (Author: samarthjain): [~tedyu] - I see that the ServerSideScanMetrics class is marked as @InterfaceStability.Evolving. Would it be a bad thing if we break compatibility for the next minor release? Today, the way we have exposed the setCounter() API, we are letting users supply random counter names. Such random metrics wouldn't really be of use since the code would never update them. So IMHO it is better to enforce metric types via an enum. > Formalize Scan Metric names > --- > > Key: HBASE-17716 > URL: https://issues.apache.org/jira/browse/HBASE-17716 > Project: HBase > Issue Type: Bug > Components: metrics >Reporter: Karan Mehta >Assignee: Karan Mehta >Priority: Minor > Attachments: HBASE-17716.patch > > > HBase provides various metrics through the APIs exposed by the ScanMetrics > class. > The JIRA PHOENIX-3248 requires them to be surfaced through the Phoenix > Metrics API. Currently these metrics are referred to via hard-coded strings, > which are not formal and can break the Phoenix API. Hence we need to refactor > the code to assign enums for these metrics. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17716) Formalize Scan Metric names
[ https://issues.apache.org/jira/browse/HBASE-17716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891425#comment-15891425 ] Samarth Jain commented on HBASE-17716: -- [~tedyu] - I see that the ServerSideScanMetrics class is marked as @InterfaceStability.Evolving. Would it be a bad thing if we break compatibility for the next minor release? Today, the way we have exposed the setCounter() api, we are letting users supply random counter names. Such random metrics wouldn't really be of use since the code would never update them. So IMHO it is better to enforce metric types via enum. > Formalize Scan Metric names > --- > > Key: HBASE-17716 > URL: https://issues.apache.org/jira/browse/HBASE-17716 > Project: HBase > Issue Type: Bug > Components: metrics >Reporter: Karan Mehta >Assignee: Karan Mehta >Priority: Minor > Attachments: HBASE-17716.patch > > > HBase provides various metrics through the API's exposed by ScanMetrics > class. > The JIRA PHOENIX-3248 requires them to be surfaced through the Phoenix > Metrics API. Currently these metrics are referred via hard-coded strings, > which are not formal and can break the Phoenix API. Hence we need to refactor > the code to assign enums for these metrics. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
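The enum-based enforcement proposed in the comment above can be sketched like this. The class and metric names are illustrative, not the actual ScanMetrics/ServerSideScanMetrics code:

```java
import java.util.EnumMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: key scan metrics by an enum instead of free-form
// strings, so callers cannot register counter names the scan code never
// updates — a misspelled metric name becomes a compile error.
public class EnumKeyedMetrics {
    enum ScanMetric { COUNT_OF_ROWS_SCANNED, COUNT_OF_ROWS_FILTERED }

    private final EnumMap<ScanMetric, AtomicLong> counters =
        new EnumMap<>(ScanMetric.class);

    public EnumKeyedMetrics() {
        // Pre-populate every known metric; there is no way to add others.
        for (ScanMetric m : ScanMetric.values()) {
            counters.put(m, new AtomicLong());
        }
    }

    // Only the known metric types can be set; an arbitrary string no longer
    // type-checks, unlike a string-keyed setCounter(String, long).
    public void setCounter(ScanMetric metric, long value) {
        counters.get(metric).set(value);
    }

    public long getCounter(ScanMetric metric) {
        return counters.get(metric).get();
    }

    public static void main(String[] args) {
        EnumKeyedMetrics metrics = new EnumKeyedMetrics();
        metrics.setCounter(ScanMetric.COUNT_OF_ROWS_SCANNED, 42);
        System.out.println(metrics.getCounter(ScanMetric.COUNT_OF_ROWS_SCANNED)); // 42
    }
}
```

For older HBase versions, the fallback the comment mentions — constants in ScanMetrics/ServerSideScanMetrics — gives a single source of truth for the names without breaking the string-keyed API.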
[jira] [Updated] (HBASE-17579) Backport HBASE-16302 to 1.3.1
[ https://issues.apache.org/jira/browse/HBASE-17579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashu Pachauri updated HBASE-17579: -- Attachment: HBASE-17579.branch-1.3.002.patch > Backport HBASE-16302 to 1.3.1 > - > > Key: HBASE-17579 > URL: https://issues.apache.org/jira/browse/HBASE-17579 > Project: HBase > Issue Type: Improvement > Components: Replication >Reporter: Ashu Pachauri >Assignee: Ashu Pachauri > Fix For: 1.3.1 > > Attachments: HBASE-17579.branch-1.3.001.patch, > HBASE-17579.branch-1.3.002.patch > > > This is a simple enough change to be included in 1.3.1, and replication > monitoring essentially breaks without this change. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17707) New More Accurate TableSkew Balancer/Generator
[ https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891415#comment-15891415 ] Hadoop QA commented on HBASE-17707: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 46s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 33s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} 
| {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 25m 2s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 50s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 91m 34s {color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 127m 43s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855486/HBASE-17707-04.patch | | JIRA Issue | HBASE-17707 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 31179cbd1682 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / 613bcb3 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/5909/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/5909/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > New More Accurate TableSkew Balancer/Generator > -- > > Key: HBASE-17707 > URL: https://issues.apache.org/jira/browse/HBASE-17707 > Project: HBase > Issue Type: New Feature > Components: Balancer >Affects Versions: 1.2.0 > Environment: CentOS Derivative with a derivative of the 3.18.43 > kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches. >
[jira] [Commented] (HBASE-17710) HBase in standalone mode creates directories with 777 permission
[ https://issues.apache.org/jira/browse/HBASE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891399#comment-15891399 ] Hadoop QA commented on HBASE-17710: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 5s {color} | {color:blue} The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for instructions. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 31s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 32m 51s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 109m 15s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 157m 12s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855478/17710.v4.txt | | JIRA Issue | HBASE-17710 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux beeb2c9e6e20 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 613bcb3 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/5907/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/5907/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/5907/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically
[jira] [Updated] (HBASE-17600) Implement get/create/modify/delete/list namespace admin operations
[ https://issues.apache.org/jira/browse/HBASE-17600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-17600: --- Attachment: HBASE-17600.master.003.patch Try to trigger Hadoop QA again. > Implement get/create/modify/delete/list namespace admin operations > -- > > Key: HBASE-17600 > URL: https://issues.apache.org/jira/browse/HBASE-17600 > Project: HBase > Issue Type: Sub-task > Components: Client >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang > Fix For: 2.0.0 > > Attachments: HBASE-17600.master.001.patch, > HBASE-17600.master.002.patch, HBASE-17600.master.003.patch, > HBASE-17600.master.003.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17710) HBase in standalone mode creates directories with 777 permission
[ https://issues.apache.org/jira/browse/HBASE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-17710: --- Hadoop Flags: Reviewed Fix Version/s: 1.4.0 2.0.0 > HBase in standalone mode creates directories with 777 permission > > > Key: HBASE-17710 > URL: https://issues.apache.org/jira/browse/HBASE-17710 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 1.1.2 > Environment: HDP-2.5.3 >Reporter: Toshihiro Suzuki >Assignee: Ted Yu > Fix For: 2.0.0, 1.4.0 > > Attachments: 17710.branch-1.v1.txt, 17710.branch-1.v2.txt, > 17710.v1.txt, 17710.v2.txt, 17710.v3.txt, 17710.v4.txt > > > HBase in standalone mode creates directories with 777 permission in > hbase.rootdir. > Ambari metrics collector defaults to standalone mode. > {code} > # find /var/lib/ambari-metrics-collector/hbase -perm 777 -type d -exec ls -ld > {} \; > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/namespace/d0cca53847904f4b4add1caa0ce3a9af/info > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/meta > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/session > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.CATALOG/2f4ce2294cd21cecb58fd1aca5646144/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/0eb67274ece8a4a26cfeeef2c6d4cd37/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/aef86710a4005f98e2dc90675f2eb325/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.STATS/5b1d955e255e55979621214a7e4083b8/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.FUNCTION/32c033735cf144bac5637de23f7f7dd0/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRICS_METADATA/e420dfa799742fe4516ad1e4deefb793/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/HOSTED_APPS_METADATA/110be63e2a9994121fc5b48d663daf2c/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/CONTAINER_METRICS/a103719f87e8430635abf51a7fe98637/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/cdb1d032beb90e350ce309e5d383c78e/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/294deab47187494e845a5199702b4d04/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/1a263b4fe068ef2db5ba1c3e45553354/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/48f94dfb0161d8a28f645d2e1a473235/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_HOURLY/6d096ac3e70e54dd4a8612e17cfc4b11/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_DAILY/e81850d62da64c8d1c67be309f136e23/0 > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/.tmp > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/.tmp > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_MINUTE/ca9f9754ae9ae4cdc3e1b0523eecc390/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_HOURLY/8412e8a8aec5d6307943fac78ce14c7a/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_DAILY/7c3358aba91ea0d76ddd8bc3ceb2d578/0 > {code} > My analysis is as follows: > FileSystem.mkdirs(Path f) method creates a directory with permission 777. > Because HFileSystem which inherits FileSystem doesn't override the method, > when we call HFileSystem.mkdirs(Path f), it
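The analysis above boils down to: Hadoop's `FileSystem.mkdirs(Path)` applies a default 777 permission, and `HFileSystem` does not override it. A JDK-only illustration of the general fix — passing an explicit permission at directory-creation time instead of accepting the default — assuming a POSIX filesystem (the actual patch targets Hadoop's `FileSystem.mkdirs(Path, FsPermission)` overload, not this JDK API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Create a directory with an explicit permission set, so the result does not
// depend on a permissive default like the 777 observed in the report.
public class ExplicitDirPermissions {
    static Path mkdirWithPerms(Path dir, String rwx) throws IOException {
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString(rwx);
        return Files.createDirectory(dir, PosixFilePermissions.asFileAttribute(perms));
    }

    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("perm-demo");
        // 700 instead of 777: owner-only access for region data directories.
        Path d = mkdirWithPerms(base.resolve("data"), "rwx------");
        System.out.println(PosixFilePermissions.toString(
            Files.getPosixFilePermissions(d))); // rwx------
    }
}
```

The same principle applies on the Hadoop side: supply an `FsPermission` explicitly rather than letting the local filesystem fall through to the wide-open default.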
[jira] [Commented] (HBASE-17716) Formalize Scan Metric names
[ https://issues.apache.org/jira/browse/HBASE-17716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891389#comment-15891389 ] Ted Yu commented on HBASE-17716: Add license header to MetricType.java {code} 11 REGIONS_SCANNED("Number of regions"), {code} "Number of regions" -> "Number of regions scanned" {code} 67 public void setCounter(MetricType counterName, long value) { {code} Changes in ServerSideScanMetrics of the above form make this an incompatible change. > Formalize Scan Metric names > --- > > Key: HBASE-17716 > URL: https://issues.apache.org/jira/browse/HBASE-17716 > Project: HBase > Issue Type: Bug > Components: metrics >Reporter: Karan Mehta >Assignee: Karan Mehta >Priority: Minor > Attachments: HBASE-17716.patch > > > HBase provides various metrics through the API's exposed by ScanMetrics > class. > The JIRA PHOENIX-3248 requires them to be surfaced through the Phoenix > Metrics API. Currently these metrics are referred via hard-coded strings, > which are not formal and can break the Phoenix API. Hence we need to refactor > the code to assign enums for these metrics. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17710) HBase in standalone mode creates directories with 777 permission
[ https://issues.apache.org/jira/browse/HBASE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891383#comment-15891383 ] Hadoop QA commented on HBASE-17710: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 4s {color} | {color:blue} The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for instructions. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 34s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s {color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} branch-1 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 50s {color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s {color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s {color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 14m 0s {color} | {color:green} The patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. 
{color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s {color} | {color:green} the patch passed with JDK v1.8.0_121 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 32s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 117m 34s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hbase.client.TestHCM | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:e01ee2f | | JIRA Patch URL |
[jira] [Commented] (HBASE-17717) Incorrect ZK ACL set for HBase superuser
[ https://issues.apache.org/jira/browse/HBASE-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891381#comment-15891381 ] Enis Soztutar commented on HBASE-17717: --- The documentation on the zookeeper side is not explicit unfortunately. So it only replaces {{auth}} with {{sasl}} iff the user principal matches with the current user? If so +1. > Incorrect ZK ACL set for HBase superuser > > > Key: HBASE-17717 > URL: https://issues.apache.org/jira/browse/HBASE-17717 > Project: HBase > Issue Type: Bug > Components: security, Zookeeper >Reporter: Shreya Bhat >Assignee: Josh Elser > Fix For: 2.0.0, 1.3.1, 1.1.10, 1.2.6 > > Attachments: HBASE-17717.001.patch > > > Shreya was doing some testing of a deploy of HBase, verifying that the ZK > ACLs were actually set as we expect (yay, security). > She noticed that, in some cases, we were seeing multiple ACLs for the same > user. > {noformat} > 'world,'anyone > : r > 'sasl,'hbase > : cdrwa > 'sasl,'hbase > : cdrwa > {noformat} > After digging into this (and some insight from the mighty [~enis]), we > realized that this was happening because of an overridden value for > {{hbase.superuser}}. However, the ACL value doesn't match what we'd expect to > see (as hbase.superuser was set to {{cstm-hbase}}). > After digging into this code, it seems like the {{auth}} ACL scheme in > ZooKeeper does not work as we expect. > {code} > if (superUser != null) { > acls.add(new ACL(Perms.ALL, new Id("auth", superUser))); > } > {code} > In the above, the {{"auth"}} scheme ignores any provided "subject" in the > {{Id}} object. It *only* considers the authentication of the current > connection. As such, our usage of this never actually sets the ACL for the > superuser correctly. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
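The bug in this report is that ZooKeeper's `auth` ACL scheme ignores the subject supplied in the `Id` and grants access to whoever authenticated the current connection. A self-contained sketch of the fix, using minimal stand-ins for ZooKeeper's `org.apache.zookeeper.data.ACL`/`Id` so the snippet compiles on its own:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: build the superuser ACL with the "sasl" scheme, which honors the
// principal name, instead of "auth", which drops it on the floor.
public class SuperuserAclSketch {
    static final int PERMS_ALL = 0x1f; // mirrors ZooKeeper's ZooDefs.Perms.ALL

    static final class Id {
        final String scheme;
        final String subject;
        Id(String scheme, String subject) { this.scheme = scheme; this.subject = subject; }
    }

    static final class Acl {
        final int perms;
        final Id id;
        Acl(int perms, Id id) { this.perms = perms; this.id = id; }
    }

    static List<Acl> aclsFor(String superUser) {
        List<Acl> acls = new ArrayList<>();
        if (superUser != null) {
            // Buggy form from the report: new Acl(PERMS_ALL, new Id("auth", superUser)).
            // "auth" would grant ALL to the connection's own identity, never to
            // superUser; "sasl" keys the ACL to the named principal.
            acls.add(new Acl(PERMS_ALL, new Id("sasl", superUser)));
        }
        return acls;
    }

    public static void main(String[] args) {
        Acl acl = aclsFor("cstm-hbase").get(0);
        System.out.println(acl.id.scheme + "," + acl.id.subject); // sasl,cstm-hbase
    }
}
```

This also explains the duplicate `'sasl,'hbase` entries Shreya observed: the `auth` entry collapses to the connection's own SASL identity, shadowing the intended `cstm-hbase` grant.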
[jira] [Updated] (HBASE-17716) Formalize Scan Metric names
[ https://issues.apache.org/jira/browse/HBASE-17716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karan Mehta updated HBASE-17716: Status: Patch Available (was: Open) > Formalize Scan Metric names > --- > > Key: HBASE-17716 > URL: https://issues.apache.org/jira/browse/HBASE-17716 > Project: HBase > Issue Type: Bug > Components: metrics >Reporter: Karan Mehta >Assignee: Karan Mehta >Priority: Minor > Attachments: HBASE-17716.patch > > > HBase provides various metrics through the API's exposed by ScanMetrics > class. > The JIRA PHOENIX-3248 requires them to be surfaced through the Phoenix > Metrics API. Currently these metrics are referred via hard-coded strings, > which are not formal and can break the Phoenix API. Hence we need to refactor > the code to assign enums for these metrics. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17716) Formalize Scan Metric names
[ https://issues.apache.org/jira/browse/HBASE-17716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karan Mehta updated HBASE-17716: Attachment: HBASE-17716.patch > Formalize Scan Metric names > --- > > Key: HBASE-17716 > URL: https://issues.apache.org/jira/browse/HBASE-17716 > Project: HBase > Issue Type: Bug > Components: metrics >Reporter: Karan Mehta >Assignee: Karan Mehta >Priority: Minor > Attachments: HBASE-17716.patch > > > HBase provides various metrics through the API's exposed by ScanMetrics > class. > The JIRA PHOENIX-3248 requires them to be surfaced through the Phoenix > Metrics API. Currently these metrics are referred via hard-coded strings, > which are not formal and can break the Phoenix API. Hence we need to refactor > the code to assign enums for these metrics. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17717) Incorrect ZK ACL set for HBase superuser
[ https://issues.apache.org/jira/browse/HBASE-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891364#comment-15891364 ] Hadoop QA commented on HBASE-17717: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 42s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 33m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 15s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 7s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 46m 21s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855494/HBASE-17717.001.patch | | JIRA Issue | HBASE-17717 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 47cbcbb521f5 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / 613bcb3 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/5910/testReport/ | | modules | C: hbase-client U: hbase-client | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/5910/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Incorrect ZK ACL set for HBase superuser > > > Key: HBASE-17717 > URL: https://issues.apache.org/jira/browse/HBASE-17717 > Project: HBase > Issue Type: Bug > Components: security, Zookeeper >Reporter: Shreya Bhat >Assignee: Josh Elser > Fix For: 2.0.0, 1.3.1, 1.1.10, 1.2.6 > > Attachments: HBASE-17717.001.patch > > > Shreya was doing some
[jira] [Commented] (HBASE-17707) New More Accurate TableSkew Balancer/Generator
[ https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891361#comment-15891361 ] Hadoop QA commented on HBASE-17707: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 59s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 28m 15s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 10s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 101m 2s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 142m 3s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hbase.client.TestAsyncNonMetaRegionLocator | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855464/HBASE-17707-03.patch | | JIRA Issue | HBASE-17707 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 7f25ce6b9285 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 613bcb3 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/5906/artifact/patchprocess/patch-unit-hbase-server.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/5906/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/5906/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/5906/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > New More Accurate TableSkew Balancer/Generator > -- > > Key:
[jira] [Commented] (HBASE-17717) Incorrect ZK ACL set for HBase superuser
[ https://issues.apache.org/jira/browse/HBASE-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891321#comment-15891321 ] Josh Elser commented on HBASE-17717: Also, forgot to mention that I did a little testing with a real cluster and was able to observe the ACLs correcting themselves on cluster restart (given the current value of {{hbase.superuser}}). > Incorrect ZK ACL set for HBase superuser > > > Key: HBASE-17717 > URL: https://issues.apache.org/jira/browse/HBASE-17717 > Project: HBase > Issue Type: Bug > Components: security, Zookeeper >Reporter: Shreya Bhat >Assignee: Josh Elser > Fix For: 2.0.0, 1.3.1, 1.1.10, 1.2.6 > > Attachments: HBASE-17717.001.patch > > > Shreya was doing some testing of a deploy of HBase, verifying that the ZK > ACLs were actually set as we expect (yay, security). > She noticed that, in some cases, we were seeing multiple ACLs for the same > user. > {noformat} > 'world,'anyone > : r > 'sasl,'hbase > : cdrwa > 'sasl,'hbase > : cdrwa > {noformat} > After digging into this (and some insight from the mighty [~enis]), we > realized that this was happening because of an overridden value for > {{hbase.superuser}}. However, the ACL value doesn't match what we'd expect to > see (as hbase.superuser was set to {{cstm-hbase}}). > After digging into this code, it seems like the {{auth}} ACL scheme in > ZooKeeper does not work as we expect. > {code} > if (superUser != null) { > acls.add(new ACL(Perms.ALL, new Id("auth", superUser))); > } > {code} > In the above, the {{"auth"}} scheme ignores any provided "subject" in the > {{Id}} object. It *only* considers the authentication of the current > connection. As such, our usage of this never actually sets the ACL for the > superuser correctly. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
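The attached patch isn't reproduced here, but the direction of the fix described above (swap the "auth" scheme for "sasl") can be sketched. The snippet below is an illustration only, not the actual HBase change; `Id` and `Acl` are minimal stand-ins for ZooKeeper's `org.apache.zookeeper.data.Id`/`ACL` so the sketch runs without a ZooKeeper dependency.

```java
import java.util.ArrayList;
import java.util.List;

public class SuperuserAclSketch {
    // Minimal stand-ins for ZooKeeper's org.apache.zookeeper.data.Id and ACL,
    // included only so this sketch is self-contained.
    static final class Id {
        final String scheme, principal;
        Id(String scheme, String principal) { this.scheme = scheme; this.principal = principal; }
    }
    static final class Acl {
        static final int PERMS_ALL = 0x1f;  // mirrors ZooKeeper's Perms.ALL
        final int perms; final Id id;
        Acl(int perms, Id id) { this.perms = perms; this.id = id; }
    }

    // The gist of the fix: the "sasl" scheme binds the ACL to the named
    // principal, whereas "auth" ignores the Id's subject entirely and only
    // applies to whoever authenticated the current connection.
    static List<Acl> aclsForSuperUser(String superUser) {
        List<Acl> acls = new ArrayList<>();
        if (superUser != null) {
            acls.add(new Acl(Acl.PERMS_ALL, new Id("sasl", superUser)));
        }
        return acls;
    }

    public static void main(String[] args) {
        List<Acl> acls = aclsForSuperUser("cstm-hbase");
        assert acls.size() == 1 && "sasl".equals(acls.get(0).id.scheme);
    }
}
```

As the patch notes, this assumes the cluster is already running in "secure" mode, since "sasl" ACLs are only meaningful for SASL-authenticated connections.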
[jira] [Updated] (HBASE-17717) Incorrect ZK ACL set for HBase superuser
[ https://issues.apache.org/jira/browse/HBASE-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Elser updated HBASE-17717: --- Attachment: HBASE-17717.001.patch .001 Switches the "auth" ACL scheme to "sasl" (under the assumption that we're in "secure" mode anyways) and expands {{TestZKUtil}} to include a few more cases. > Incorrect ZK ACL set for HBase superuser > > > Key: HBASE-17717 > URL: https://issues.apache.org/jira/browse/HBASE-17717 > Project: HBase > Issue Type: Bug > Components: security, Zookeeper >Reporter: Shreya Bhat >Assignee: Josh Elser > Fix For: 2.0.0, 1.3.1, 1.1.10, 1.2.6 > > Attachments: HBASE-17717.001.patch > > > Shreya was doing some testing of a deploy of HBase, verifying that the ZK > ACLs were actually set as we expect (yay, security). > She noticed that, in some cases, we were seeing multiple ACLs for the same > user. > {noformat} > 'world,'anyone > : r > 'sasl,'hbase > : cdrwa > 'sasl,'hbase > : cdrwa > {noformat} > After digging into this (and some insight from the mighty [~enis]), we > realized that this was happening because of an overridden value for > {{hbase.superuser}}. However, the ACL value doesn't match what we'd expect to > see (as hbase.superuser was set to {{cstm-hbase}}). > After digging into this code, it seems like the {{auth}} ACL scheme in > ZooKeeper does not work as we expect. > {code} > if (superUser != null) { > acls.add(new ACL(Perms.ALL, new Id("auth", superUser))); > } > {code} > In the above, the {{"auth"}} scheme ignores any provided "subject" in the > {{Id}} object. It *only* considers the authentication of the current > connection. As such, our usage of this never actually sets the ACL for the > superuser correctly. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-14375) define public API for spark integration module
[ https://issues.apache.org/jira/browse/HBASE-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891313#comment-15891313 ] Jerry He commented on HBASE-14375: -- Regarding the DefaultSource class, if I understand it correctly, for the data source format 'org.apache.hadoop.hbase.spark' that we give to Spark SQL, Spark SQL will automatically look for 'DefaultSource' as the implementation class in the same package. This can be changed by implementing Spark SQL's 'DataSourceRegister' interface -- an improvement we can do later. > define public API for spark integration module > -- > > Key: HBASE-14375 > URL: https://issues.apache.org/jira/browse/HBASE-14375 > Project: HBase > Issue Type: Task > Components: spark >Reporter: Sean Busbey >Assignee: Jerry He >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-14375-v1.patch > > > before we can put the spark integration module into a release, we need to > annotate its public api surface. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
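To illustrate the 'DefaultSource' lookup described above: the sketch below mimics, in deliberately simplified form (it is not Spark's actual resolution code, which also consults `DataSourceRegister` implementations via `ServiceLoader`), how a format string can be resolved by falling back to `<package>.DefaultSource` when the string itself is not a loadable class.

```java
public class ProviderResolutionSketch {
    // Simplified model of Spark SQL's data source resolution: try the format
    // string as a class name; if that fails, append ".DefaultSource" and let
    // the caller try that class instead.
    static String resolve(String format) {
        if (classExists(format)) {
            return format;
        }
        // Fall back to the conventional class name inside the package.
        return format + ".DefaultSource";
    }

    static boolean classExists(String name) {
        try {
            Class.forName(name);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // "org.apache.hadoop.hbase.spark" is a package, not a class, so the
        // sketch falls back to its DefaultSource.
        assert resolve("org.apache.hadoop.hbase.spark")
                .equals("org.apache.hadoop.hbase.spark.DefaultSource");
    }
}
```

Registering a short name via `DataSourceRegister`, as Jerry suggests, would let callers skip the package-qualified string entirely.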
[jira] [Updated] (HBASE-17717) Incorrect ZK ACL set for HBase superuser
[ https://issues.apache.org/jira/browse/HBASE-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Elser updated HBASE-17717: --- Status: Patch Available (was: Open) > Incorrect ZK ACL set for HBase superuser > > > Key: HBASE-17717 > URL: https://issues.apache.org/jira/browse/HBASE-17717 > Project: HBase > Issue Type: Bug > Components: security, Zookeeper >Reporter: Shreya Bhat >Assignee: Josh Elser > Fix For: 2.0.0, 1.3.1, 1.1.10, 1.2.6 > > Attachments: HBASE-17717.001.patch > > > Shreya was doing some testing of a deploy of HBase, verifying that the ZK > ACLs were actually set as we expect (yay, security). > She noticed that, in some cases, we were seeing multiple ACLs for the same > user. > {noformat} > 'world,'anyone > : r > 'sasl,'hbase > : cdrwa > 'sasl,'hbase > : cdrwa > {noformat} > After digging into this (and some insight from the mighty [~enis]), we > realized that this was happening because of an overridden value for > {{hbase.superuser}}. However, the ACL value doesn't match what we'd expect to > see (as hbase.superuser was set to {{cstm-hbase}}). > After digging into this code, it seems like the {{auth}} ACL scheme in > ZooKeeper does not work as we expect. > {code} > if (superUser != null) { > acls.add(new ACL(Perms.ALL, new Id("auth", superUser))); > } > {code} > In the above, the {{"auth"}} scheme ignores any provided "subject" in the > {{Id}} object. It *only* considers the authentication of the current > connection. As such, our usage of this never actually sets the ACL for the > superuser correctly. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17714) Client heartbeats seems to be broken
[ https://issues.apache.org/jira/browse/HBASE-17714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891301#comment-15891301 ] Samarth Jain commented on HBASE-17714: -- Thanks for the investigation, [~apurtell]. I will make config changes in the test to increase the frequency of heartbeat checks and to see if enabling renewing leases would help. For the latter, my guess is that it wouldn't help because the call to renew lease is synchronized from the client side and would be blocked till scanner.next() returns. > Client heartbeats seems to be broken > > > Key: HBASE-17714 > URL: https://issues.apache.org/jira/browse/HBASE-17714 > Project: HBase > Issue Type: Bug >Reporter: Samarth Jain > > We have a test in Phoenix where we introduce an artificial sleep of 2 times > the RPC timeout in preScannerNext() hook of a co-processor. > {code} > public static class SleepingRegionObserver extends SimpleRegionObserver { > public SleepingRegionObserver() {} > > @Override > public boolean preScannerNext(final > ObserverContext c, > final InternalScanner s, final List results, > final int limit, final boolean hasMore) throws IOException { > try { > if (SLEEP_NOW && > c.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString().equals(TABLE_NAME)) > { > Thread.sleep(RPC_TIMEOUT * 2); > } > } catch (InterruptedException e) { > throw new IOException(e); > } > return super.preScannerNext(c, s, results, limit, hasMore); > } > } > {code} > This test was passing fine till 1.1.3 but started failing sometime before > 1.1.9 with an OutOfOrderScannerException. See PHOENIX-3702. [~lhofhansl] > mentioned that we have client heartbeats enabled and that should prevent us > from running into issues like this. FYI, this test fails with 1.2.3 version > of HBase too. > CC [~apurtell], [~jamestaylor] -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17710) HBase in standalone mode creates directories with 777 permission
[ https://issues.apache.org/jira/browse/HBASE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891295#comment-15891295 ] Esteban Gutierrez commented on HBASE-17710: --- +1 I'm ok with this fix for now. > HBase in standalone mode creates directories with 777 permission > > > Key: HBASE-17710 > URL: https://issues.apache.org/jira/browse/HBASE-17710 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 1.1.2 > Environment: HDP-2.5.3 >Reporter: Toshihiro Suzuki >Assignee: Ted Yu > Attachments: 17710.branch-1.v1.txt, 17710.branch-1.v2.txt, > 17710.v1.txt, 17710.v2.txt, 17710.v3.txt, 17710.v4.txt > > > HBase in standalone mode creates directories with 777 permission in > hbase.rootdir. > Ambari metrics collector defaults to standalone mode. > {code} > # find /var/lib/ambari-metrics-collector/hbase -perm 777 -type d -exec ls -ld > {} \; > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/namespace/d0cca53847904f4b4add1caa0ce3a9af/info > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/meta > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/session > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.CATALOG/2f4ce2294cd21cecb58fd1aca5646144/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/0eb67274ece8a4a26cfeeef2c6d4cd37/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/aef86710a4005f98e2dc90675f2eb325/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.STATS/5b1d955e255e55979621214a7e4083b8/0 > drwxrwxrwx.
2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.FUNCTION/32c033735cf144bac5637de23f7f7dd0/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRICS_METADATA/e420dfa799742fe4516ad1e4deefb793/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/HOSTED_APPS_METADATA/110be63e2a9994121fc5b48d663daf2c/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/CONTAINER_METRICS/a103719f87e8430635abf51a7fe98637/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/cdb1d032beb90e350ce309e5d383c78e/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/294deab47187494e845a5199702b4d04/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/1a263b4fe068ef2db5ba1c3e45553354/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/48f94dfb0161d8a28f645d2e1a473235/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_HOURLY/6d096ac3e70e54dd4a8612e17cfc4b11/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_DAILY/e81850d62da64c8d1c67be309f136e23/0 > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/.tmp > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/.tmp > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_MINUTE/ca9f9754ae9ae4cdc3e1b0523eecc390/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_HOURLY/8412e8a8aec5d6307943fac78ce14c7a/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_DAILY/7c3358aba91ea0d76ddd8bc3ceb2d578/0 > {code} > My analysis is as follows: > FileSystem.mkdirs(Path f) method creates a directory with permission 777. > Because HFileSystem which inherits FileSystem doesn't override the method, > when we call HFileSystem.mkdirs(Path f), it tries to create a
[jira] [Commented] (HBASE-17704) Regions stuck in FAILED_OPEN when HDFS blocks are missing
[ https://issues.apache.org/jira/browse/HBASE-17704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891281#comment-15891281 ] Gary Helmling commented on HBASE-17704: --- So HBASE-16209 added a backoff policy for retries of region open, without which regions would go into FAILED_OPEN quickly. So maybe all that's needed is to bump up the configuration for maximum attempts ("hbase.assignment.maximum.attempts") to Integer.MAX_VALUE? > Regions stuck in FAILED_OPEN when HDFS blocks are missing > - > > Key: HBASE-17704 > URL: https://issues.apache.org/jira/browse/HBASE-17704 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 1.1.8 >Reporter: Mathias Herberts > > We recently experienced the loss of a whole rack (6 DNs + RS) in a 120 node > cluster. This led to the regions which were present on the 6 RS becoming > unavailable and being reassigned to live RSs. When attempting to open some of the > reassigned regions, some RS encountered missing blocks and issued "No live > nodes contain current block Block locations" putting the regions in state > FAILED_OPEN. > Once the disappeared DNs went back online, the regions were left in > FAILED_OPEN, needing a restart of all the affected RSs to solve the problem. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
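If the route suggested above is taken, the change would be a configuration edit along these lines (the property name is taken from the comment; the value shown is illustrative, not a recommendation):

```xml
<!-- hbase-site.xml: keep retrying region open instead of giving up
     and entering FAILED_OPEN (2147483647 == Integer.MAX_VALUE) -->
<property>
  <name>hbase.assignment.maximum.attempts</name>
  <value>2147483647</value>
</property>
```

Note that with the HBASE-16209 backoff in place, retries become progressively less frequent, so a very large attempt count should not hammer the cluster while the DNs are down.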
[jira] [Created] (HBASE-17717) Incorrect ZK ACL set for HBase superuser
Josh Elser created HBASE-17717: -- Summary: Incorrect ZK ACL set for HBase superuser Key: HBASE-17717 URL: https://issues.apache.org/jira/browse/HBASE-17717 Project: HBase Issue Type: Bug Components: security, Zookeeper Reporter: Shreya Bhat Assignee: Josh Elser Fix For: 2.0.0, 1.3.1, 1.1.10, 1.2.6 Shreya was doing some testing of a deploy of HBase, verifying that the ZK ACLs were actually set as we expect (yay, security). She noticed that, in some cases, we were seeing multiple ACLs for the same user. {noformat} 'world,'anyone : r 'sasl,'hbase : cdrwa 'sasl,'hbase : cdrwa {noformat} After digging into this (and some insight from the mighty [~enis]), we realized that this was happening because of an overridden value for {{hbase.superuser}}. However, the ACL value doesn't match what we'd expect to see (as hbase.superuser was set to {{cstm-hbase}}). After digging into this code, it seems like the {{auth}} ACL scheme in ZooKeeper does not work as we expect. {code} if (superUser != null) { acls.add(new ACL(Perms.ALL, new Id("auth", superUser))); } {code} In the above, the {{"auth"}} scheme ignores any provided "subject" in the {{Id}} object. It *only* considers the authentication of the current connection. As such, our usage of this never actually sets the ACL for the superuser correctly. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17714) Client heartbeats seems to be broken
[ https://issues.apache.org/jira/browse/HBASE-17714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891272#comment-15891272 ] Andrew Purtell commented on HBASE-17714: The release notes on HBASE-13090 say {quote} To ensure that timeout checks do not occur too often (which would hurt the performance of scans), the configuration "hbase.cells.scanned.per.heartbeat.check" has been introduced. This configuration controls how often System.currentTimeMillis() is called to update the progress towards the time limit. Currently, the default value of this configuration value is 1. {quote} > Client heartbeats seems to be broken > > > Key: HBASE-17714 > URL: https://issues.apache.org/jira/browse/HBASE-17714 > Project: HBase > Issue Type: Bug >Reporter: Samarth Jain > > We have a test in Phoenix where we introduce an artificial sleep of 2 times > the RPC timeout in preScannerNext() hook of a co-processor. > {code} > public static class SleepingRegionObserver extends SimpleRegionObserver { > public SleepingRegionObserver() {} > > @Override > public boolean preScannerNext(final > ObserverContext c, > final InternalScanner s, final List results, > final int limit, final boolean hasMore) throws IOException { > try { > if (SLEEP_NOW && > c.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString().equals(TABLE_NAME)) > { > Thread.sleep(RPC_TIMEOUT * 2); > } > } catch (InterruptedException e) { > throw new IOException(e); > } > return super.preScannerNext(c, s, results, limit, hasMore); > } > } > {code} > This test was passing fine till 1.1.3 but started failing sometime before > 1.1.9 with an OutOfOrderScannerException. See PHOENIX-3702. [~lhofhansl] > mentioned that we have client heartbeats enabled and that should prevent us > from running into issues like this. FYI, this test fails with 1.2.3 version > of HBase too. > CC [~apurtell], [~jamestaylor] -- This message was sent by Atlassian JIRA (v6.3.15#6346)
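To make the quoted release note concrete: below is a simplified sketch (not HBase's actual scanner code) of per-N-cells time-limit checking. The scanner only consults the clock once every `cellsPerCheck` cells — the role played by "hbase.cells.scanned.per.heartbeat.check" — so that timeout checks do not dominate scan cost; the elapsed-time model here is synthetic for illustration.

```java
public class HeartbeatCheckSketch {
    // Simulates a scan that accrues a fixed cost per cell and checks the
    // time limit only every cellsPerCheck cells. Returns the cell index at
    // which the scan would stop and send a heartbeat/partial response.
    static int scanUntilTimeLimit(int totalCells, int cellsPerCheck,
                                  long timeLimitNanos, long nanosPerCell) {
        long elapsed = 0;
        for (int i = 1; i <= totalCells; i++) {
            elapsed += nanosPerCell;  // cost of reading one cell (modelled)
            // The clock is only consulted every cellsPerCheck cells, so the
            // scan can overshoot the limit by up to cellsPerCheck - 1 cells.
            if (i % cellsPerCheck == 0 && elapsed >= timeLimitNanos) {
                return i;
            }
        }
        return totalCells;  // limit never reached
    }

    public static void main(String[] args) {
        // Limit of 2500 units at 1 unit/cell, checked every 100 cells:
        // the scan stops exactly at cell 2500.
        assert scanUntilTimeLimit(10_000, 100, 2_500, 1) == 2_500;
    }
}
```

This also illustrates the trade-off in the release note: a larger check interval is cheaper per cell but lets the scan run further past the time limit before a heartbeat can be sent.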
[jira] [Comment Edited] (HBASE-17714) Client heartbeats seems to be broken
[ https://issues.apache.org/jira/browse/HBASE-17714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891268#comment-15891268 ] Andrew Purtell edited comment on HBASE-17714 at 3/1/17 11:18 PM: - e5c1a80 (HBASE-15645 hbase.rpc.timeout is not used in operations of HTable) is the commit that causes RenewLeaseIT to start failing. You have to apply HBASE-16420 (Fix source incompatibility of Table interface) after checking out e5c1a80 to get something that 4.x-HBase-1.1 will compile against. HBASE-15645 was shipped in 1.1.5, 1.2.2, and 1.3.0. What this change does is fix where the client was not actually honoring RPC timeouts prior to the change. [~samarthjain] are you sure RenewLeaseIT actually renews the lease or allows for a client heartbeat to happen before the RPC times out? The test sets a very short RPC timeout (2000ms) but makes no other configuration changes was (Author: apurtell): e5c1a80 (HBASE-15645 hbase.rpc.timeout is not used in operations of HTable) is the commit that causes RenewLeaseIT to start failing. You have to apply HBASE-16420 (Fix source incompatibility of Table interface) after checking out e5c1a80 to get something that 4.x-HBase-1.1 will compile against. HBASE-15645 was shipped in 1.1.5, 1.2.2, and 1.3.0. What this change does is fix where the client was not actually honoring RPC timeouts prior to the change. [~samarthjain] are you sure RenewLeaseIT actually renews the lease before the RPC times out? The test sets a very short RPC timeout (2000ms) > Client heartbeats seems to be broken > > > Key: HBASE-17714 > URL: https://issues.apache.org/jira/browse/HBASE-17714 > Project: HBase > Issue Type: Bug >Reporter: Samarth Jain > > We have a test in Phoenix where we introduce an artificial sleep of 2 times > the RPC timeout in preScannerNext() hook of a co-processor. 
> {code} > public static class SleepingRegionObserver extends SimpleRegionObserver { > public SleepingRegionObserver() {} > > @Override > public boolean preScannerNext(final > ObserverContext c, > final InternalScanner s, final List results, > final int limit, final boolean hasMore) throws IOException { > try { > if (SLEEP_NOW && > c.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString().equals(TABLE_NAME)) > { > Thread.sleep(RPC_TIMEOUT * 2); > } > } catch (InterruptedException e) { > throw new IOException(e); > } > return super.preScannerNext(c, s, results, limit, hasMore); > } > } > {code} > This test was passing fine till 1.1.3 but started failing sometime before > 1.1.9 with an OutOfOrderScannerException. See PHOENIX-3702. [~lhofhansl] > mentioned that we have client heartbeats enabled and that should prevent us > from running into issues like this. FYI, this test fails with 1.2.3 version > of HBase too. > CC [~apurtell], [~jamestaylor] -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17714) Client heartbeats seems to be broken
[ https://issues.apache.org/jira/browse/HBASE-17714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891268#comment-15891268 ] Andrew Purtell commented on HBASE-17714: e5c1a80 (HBASE-15645 hbase.rpc.timeout is not used in operations of HTable) is the commit that causes RenewLeaseIT to start failing. You have to apply HBASE-16420 (Fix source incompatibility of Table interface) after checking out e5c1a80 to get something that 4.x-HBase-1.1 will compile against. HBASE-15645 was shipped in 1.1.5, 1.2.2, and 1.3.0. What this change does is fix where the client was not actually honoring RPC timeouts prior to the change. [~samarthjain] are you sure RenewLeaseIT actually renews the lease before the RPC times out? The test sets a very short RPC timeout (2000ms) > Client heartbeats seems to be broken > > > Key: HBASE-17714 > URL: https://issues.apache.org/jira/browse/HBASE-17714 > Project: HBase > Issue Type: Bug >Reporter: Samarth Jain > > We have a test in Phoenix where we introduce an artificial sleep of 2 times > the RPC timeout in preScannerNext() hook of a co-processor. > {code} > public static class SleepingRegionObserver extends SimpleRegionObserver { > public SleepingRegionObserver() {} > > @Override > public boolean preScannerNext(final > ObserverContext c, > final InternalScanner s, final List results, > final int limit, final boolean hasMore) throws IOException { > try { > if (SLEEP_NOW && > c.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString().equals(TABLE_NAME)) > { > Thread.sleep(RPC_TIMEOUT * 2); > } > } catch (InterruptedException e) { > throw new IOException(e); > } > return super.preScannerNext(c, s, results, limit, hasMore); > } > } > {code} > This test was passing fine till 1.1.3 but started failing sometime before > 1.1.9 with an OutOfOrderScannerException. See PHOENIX-3702. [~lhofhansl] > mentioned that we have client heartbeats enabled and that should prevent us > from running into issues like this. 
FYI, this test fails with 1.2.3 version > of HBase too. > CC [~apurtell], [~jamestaylor] -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17707) New More Accurate TableSkew Balancer/Generator
[ https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891266#comment-15891266 ] Kahlil Oppenheimer commented on HBASE-17707: Good idea. I just set min_cost_need_balance to 0.02 and tableSkewCost to 4. > New More Accurate TableSkew Balancer/Generator > -- > > Key: HBASE-17707 > URL: https://issues.apache.org/jira/browse/HBASE-17707 > Project: HBase > Issue Type: New Feature > Components: Balancer >Affects Versions: 1.2.0 > Environment: CentOS Derivative with a derivative of the 3.18.43 > kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches. >Reporter: Kahlil Oppenheimer >Priority: Minor > Labels: patch > Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, > HBASE-17707-02.patch, HBASE-17707-03.patch, HBASE-17707-04.patch > > > This patch includes a new version of the TableSkewCostFunction and a new > TableSkewCandidateGenerator. > The new TableSkewCostFunction computes table skew by counting the minimal > number of region moves required for a given table to perfectly balance the > table across the cluster (i.e. as if the regions from that table had been > round-robin-ed across the cluster). This number of moves is computed for each > table, then normalized to a score between 0-1 by dividing by the number of > moves required in the absolute worst case (i.e. the entire table is stored on > one server), and stored in an array. The cost function then takes a weighted > average of the average and maximum value across all tables. The weights in > this average are configurable to allow for certain users to more strongly > penalize situations where one table is skewed versus where every table is a > little bit skewed. To better spread this value more evenly across the range > 0-1, we take the square root of the weighted average to get the final value. > The new TableSkewCandidateGenerator generates region moves/swaps to optimize > the above TableSkewCostFunction.
It first simply tries to move regions until > each server has the right number of regions, then it swaps regions around > such that each region swap improves table skew across the cluster. > We tested the cost function and generator in our production clusters with > 100s of TBs of data and 100s of tables across dozens of servers and found > both to be very performant and accurate. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
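The normalization described in the issue body — per-table moves divided by the worst case, a configurable weighted average of the mean and the max, then a square root — can be sketched as follows. This is an illustrative reimplementation of the described math, not the attached patch; the method and parameter names are invented for the sketch.

```java
public class TableSkewCostSketch {
    // movesPerTable[i] = minimal region moves to perfectly balance table i
    // worstCase[i]     = moves needed if the entire table sat on one server
    // maxWeight in [0,1] = how strongly to penalize the single worst table
    static double cost(int[] movesPerTable, int[] worstCase, double maxWeight) {
        double sum = 0, max = 0;
        for (int i = 0; i < movesPerTable.length; i++) {
            // Normalize each table's moves to [0,1] by the worst case.
            double normalized = worstCase[i] == 0 ? 0
                    : (double) movesPerTable[i] / worstCase[i];
            sum += normalized;
            max = Math.max(max, normalized);
        }
        double avg = sum / movesPerTable.length;
        // Weighted average of the mean and the maximum across tables.
        double weighted = maxWeight * max + (1 - maxWeight) * avg;
        // Square root spreads the score more evenly over [0,1].
        return Math.sqrt(weighted);
    }

    public static void main(String[] args) {
        // One perfectly balanced table (0 of 8 moves) and one maximally
        // skewed table (8 of 8 moves), weighting mean and max equally:
        // avg = 0.5, max = 1.0, weighted = 0.75, cost = sqrt(0.75).
        double c = cost(new int[]{0, 8}, new int[]{8, 8}, 0.5);
        assert Math.abs(c - Math.sqrt(0.75)) < 1e-9;
    }
}
```

A higher `maxWeight` penalizes a single badly skewed table more than uniform mild skew, which matches the configurability the description calls out.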
[jira] [Updated] (HBASE-17707) New More Accurate TableSkew Balancer/Generator
[ https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kahlil Oppenheimer updated HBASE-17707: --- Status: Open (was: Patch Available) > New More Accurate TableSkew Balancer/Generator > -- > > Key: HBASE-17707 > URL: https://issues.apache.org/jira/browse/HBASE-17707 > Project: HBase > Issue Type: New Feature > Components: Balancer >Affects Versions: 1.2.0 > Environment: CentOS Derivative with a derivative of the 3.18.43 > kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches. >Reporter: Kahlil Oppenheimer >Priority: Minor > Labels: patch > Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, > HBASE-17707-02.patch, HBASE-17707-03.patch, HBASE-17707-04.patch > > > This patch includes a new version of the TableSkewCostFunction and a new > TableSkewCandidateGenerator. > The new TableSkewCostFunction computes table skew by counting the minimal > number of region moves required for a given table to perfectly balance the > table across the cluster (i.e. as if the regions from that table had been > round-robin-ed across the cluster). This number of moves is computed for each > table, then normalized to a score between 0-1 by dividing by the number of > moves required in the absolute worst case (i.e. the entire table is stored on > one server), and stored in an array. The cost function then takes a weighted > average of the average and maximum value across all tables. The weights in > this average are configurable to allow for certain users to more strongly > penalize situations where one table is skewed versus where every table is a > little bit skewed. To better spread this value more evenly across the range > 0-1, we take the square root of the weighted average to get the final value. > The new TableSkewCandidateGenerator generates region moves/swaps to optimize > the above TableSkewCostFunction.
It first simply tries to move regions until > each server has the right number of regions, then it swaps regions around > such that each region swap improves table skew across the cluster. > We tested the cost function and generator in our production clusters with > 100s of TBs of data and 100s of tables across dozens of servers and found > both to be very performant and accurate. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
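The skew score described in the issue above can be sketched as follows. This is a minimal illustrative reimplementation, not the actual StochasticLoadBalancer code: the class and method names (`TableSkewSketch`, `skewScore`) and the array-based inputs are hypothetical, and only the scheme itself (per-table moves normalized by the worst case, a configurable weighted average of mean and max, then a square root) comes from the description.

```java
// Hypothetical sketch of the HBASE-17707 table-skew scoring scheme.
// Names and signatures are illustrative, not HBase API.
public class TableSkewSketch {
    // movesPerTable[i]: moves needed to perfectly balance table i.
    // worstCasePerTable[i]: moves needed if table i lived entirely on one server.
    static double skewScore(int[] movesPerTable, int[] worstCasePerTable,
                            double avgWeight, double maxWeight) {
        double sum = 0, max = 0;
        for (int i = 0; i < movesPerTable.length; i++) {
            // Normalize each table's move count to [0, 1] against the worst case.
            double normalized = worstCasePerTable[i] == 0
                ? 0 : (double) movesPerTable[i] / worstCasePerTable[i];
            sum += normalized;
            max = Math.max(max, normalized);
        }
        double avg = sum / movesPerTable.length;
        // Weighted average of the mean and the maximum per-table skew,
        // then sqrt to spread the result more evenly over [0, 1].
        double weighted = (avgWeight * avg + maxWeight * max) / (avgWeight + maxWeight);
        return Math.sqrt(weighted);
    }

    public static void main(String[] args) {
        // One table half-skewed (5 of 10 worst-case moves), one perfectly balanced.
        System.out.println(skewScore(new int[]{5, 0}, new int[]{10, 8}, 1.0, 1.0));
    }
}
```

With equal weights, a half-skewed table next to a balanced one yields sqrt((0.25 + 0.5) / 2) ≈ 0.61; raising `maxWeight` relative to `avgWeight` penalizes the single skewed table more strongly, matching the configurability described above.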
[jira] [Updated] (HBASE-17707) New More Accurate TableSkew Balancer/Generator
[ https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kahlil Oppenheimer updated HBASE-17707: --- Status: Patch Available (was: Open) > New More Accurate TableSkew Balancer/Generator > -- > > Key: HBASE-17707 > URL: https://issues.apache.org/jira/browse/HBASE-17707 > Project: HBase > Issue Type: New Feature > Components: Balancer >Affects Versions: 1.2.0 > Environment: CentOS Derivative with a derivative of the 3.18.43 > kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches. >Reporter: Kahlil Oppenheimer >Priority: Minor > Labels: patch > Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, > HBASE-17707-02.patch, HBASE-17707-03.patch, HBASE-17707-04.patch > > > This patch includes a new version of the TableSkewCostFunction and a new > TableSkewCandidateGenerator. > The new TableSkewCostFunction computes table skew by counting the minimal > number of region moves required for a given table to perfectly balance the > table across the cluster (i.e. as if the regions from that table had been > round-robin-ed across the cluster). This number of moves is computed for each > table, then normalized to a score between 0-1 by dividing by the number of > moves required in the absolute worst case (i.e. the entire table is stored on > one server), and stored in an array. The cost function then takes a weighted > average of the average and maximum value across all tables. The weights in > this average are configurable to allow for certain users to more strongly > penalize situations where one table is skewed versus where every table is a > little bit skewed. To spread this value more evenly across the range > 0-1, we take the square root of the weighted average to get the final value. > The new TableSkewCandidateGenerator generates region moves/swaps to optimize > the above TableSkewCostFunction. 
It first simply tries to move regions until > each server has the right number of regions, then it swaps regions around > such that each region swap improves table skew across the cluster. > We tested the cost function and generator in our production clusters with > 100s of TBs of data and 100s of tables across dozens of servers and found > both to be very performant and accurate. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17707) New More Accurate TableSkew Balancer/Generator
[ https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kahlil Oppenheimer updated HBASE-17707: --- Attachment: HBASE-17707-04.patch > New More Accurate TableSkew Balancer/Generator > -- > > Key: HBASE-17707 > URL: https://issues.apache.org/jira/browse/HBASE-17707 > Project: HBase > Issue Type: New Feature > Components: Balancer >Affects Versions: 1.2.0 > Environment: CentOS Derivative with a derivative of the 3.18.43 > kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches. >Reporter: Kahlil Oppenheimer >Priority: Minor > Labels: patch > Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, > HBASE-17707-02.patch, HBASE-17707-03.patch, HBASE-17707-04.patch > > > This patch includes a new version of the TableSkewCostFunction and a new > TableSkewCandidateGenerator. > The new TableSkewCostFunction computes table skew by counting the minimal > number of region moves required for a given table to perfectly balance the > table across the cluster (i.e. as if the regions from that table had been > round-robin-ed across the cluster). This number of moves is computed for each > table, then normalized to a score between 0-1 by dividing by the number of > moves required in the absolute worst case (i.e. the entire table is stored on > one server), and stored in an array. The cost function then takes a weighted > average of the average and maximum value across all tables. The weights in > this average are configurable to allow for certain users to more strongly > penalize situations where one table is skewed versus where every table is a > little bit skewed. To spread this value more evenly across the range > 0-1, we take the square root of the weighted average to get the final value. > The new TableSkewCandidateGenerator generates region moves/swaps to optimize > the above TableSkewCostFunction. 
It first simply tries to move regions until > each server has the right number of regions, then it swaps regions around > such that each region swap improves table skew across the cluster. > We tested the cost function and generator in our production clusters with > 100s of TBs of data and 100s of tables across dozens of servers and found > both to be very performant and accurate. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (HBASE-17716) Formalize Scan Metric names
Karan Mehta created HBASE-17716: --- Summary: Formalize Scan Metric names Key: HBASE-17716 URL: https://issues.apache.org/jira/browse/HBASE-17716 Project: HBase Issue Type: Bug Components: metrics Reporter: Karan Mehta Assignee: Karan Mehta Priority: Minor HBase provides various metrics through the APIs exposed by the ScanMetrics class. The JIRA PHOENIX-3248 requires them to be surfaced through the Phoenix Metrics API. Currently these metrics are referred to via hard-coded strings, which are not formal and can break the Phoenix API. Hence we need to refactor the code to assign enums for these metrics. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
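The refactor HBASE-17716 proposes, replacing hard-coded metric strings with an enum, could look like the following sketch. This is not the actual HBase or Phoenix code: the class name, the enum, and the specific metric constants here are hypothetical placeholders for whatever names the real ScanMetrics class exposes.

```java
// Hypothetical sketch of enum-backed scan metric names (HBASE-17716).
// The enum and its constants are illustrative, not the HBase API.
public class ScanMetricNames {
    enum ScanMetric {
        RPC_CALLS("RPC_CALLS"),
        MILLIS_BETWEEN_NEXTS("MILLIS_BETWEEN_NEXTS"),
        BYTES_IN_RESULTS("BYTES_IN_RESULTS"),
        REGIONS_SCANNED("REGIONS_SCANNED");

        private final String metricName;
        ScanMetric(String metricName) { this.metricName = metricName; }
        public String getMetricName() { return metricName; }
    }

    public static void main(String[] args) {
        // Downstream callers (e.g. Phoenix) reference the enum constant, so a
        // renamed string literal becomes a compile error instead of a silent break.
        for (ScanMetric m : ScanMetric.values()) {
            System.out.println(m.getMetricName());
        }
    }
}
```

The point of the design is compile-time checking: a consumer that previously compared against `"RPC_CALLS"` as a raw string would keep compiling if HBase renamed the metric, whereas an enum reference would fail fast.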
[jira] [Commented] (HBASE-17715) expose a sane API to package a standalone client jar
[ https://issues.apache.org/jira/browse/HBASE-17715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891255#comment-15891255 ] Enis Soztutar commented on HBASE-17715: --- In theory, you can just use the hbase-shaded-client jar as the uber jar. But I am not sure it is tested with the MR jobs. > expose a sane API to package a standalone client jar > > > Key: HBASE-17715 > URL: https://issues.apache.org/jira/browse/HBASE-17715 > Project: HBase > Issue Type: Task >Reporter: Sergey Shelukhin >Assignee: Enis Soztutar > > TableMapReduceUtil currently exposes a method that takes some info from job > object iirc, and then makes a standalone jar and adds it to classpath. > It would be nice to have an API that one can call with minimum necessary > arguments (not dependent on job stuff, "tmpjars" and all that) that would > make a standalone client jar at a given path and let the caller manage it > after that. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17710) HBase in standalone mode creates directories with 777 permission
[ https://issues.apache.org/jira/browse/HBASE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-17710: --- Attachment: 17710.branch-1.v2.txt > HBase in standalone mode creates directories with 777 permission > > > Key: HBASE-17710 > URL: https://issues.apache.org/jira/browse/HBASE-17710 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 1.1.2 > Environment: HDP-2.5.3 >Reporter: Toshihiro Suzuki >Assignee: Ted Yu > Attachments: 17710.branch-1.v1.txt, 17710.branch-1.v2.txt, > 17710.v1.txt, 17710.v2.txt, 17710.v3.txt, 17710.v4.txt > > > HBase in standalone mode creates directories with 777 permission in > hbase.rootdir. > Ambari metrics collector defaults to standalone mode. > {code} > # find /var/lib/ambari-metrics-collector/hbase -perm 777 -type d -exec ls -ld > {} \; > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/namespace/d0cca53847904f4b4add1caa0ce3a9af/info > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/meta > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/session > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.CATALOG/2f4ce2294cd21cecb58fd1aca5646144/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/0eb67274ece8a4a26cfeeef2c6d4cd37/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/aef86710a4005f98e2dc90675f2eb325/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.STATS/5b1d955e255e55979621214a7e4083b8/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.FUNCTION/32c033735cf144bac5637de23f7f7dd0/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRICS_METADATA/e420dfa799742fe4516ad1e4deefb793/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/HOSTED_APPS_METADATA/110be63e2a9994121fc5b48d663daf2c/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/CONTAINER_METRICS/a103719f87e8430635abf51a7fe98637/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/cdb1d032beb90e350ce309e5d383c78e/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/294deab47187494e845a5199702b4d04/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/1a263b4fe068ef2db5ba1c3e45553354/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/48f94dfb0161d8a28f645d2e1a473235/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_HOURLY/6d096ac3e70e54dd4a8612e17cfc4b11/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_DAILY/e81850d62da64c8d1c67be309f136e23/0 > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/.tmp > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/.tmp > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_MINUTE/ca9f9754ae9ae4cdc3e1b0523eecc390/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_HOURLY/8412e8a8aec5d6307943fac78ce14c7a/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_DAILY/7c3358aba91ea0d76ddd8bc3ceb2d578/0 > {code} > My analysis is as follows: > FileSystem.mkdirs(Path f) method creates a directory with permission 777. > Because HFileSystem which inherits FileSystem doesn't override the method, > when we call HFileSystem.mkdirs(Path f), it tries to create a directory with > permission 777. > I've found that
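The analysis above says the 777 directories come from the single-argument `FileSystem.mkdirs(Path)` applying a wide default; Hadoop's two-argument overload `mkdirs(Path, FsPermission)` lets the caller pass an explicit mode instead. The sketch below illustrates the difference using JDK POSIX permission types rather than Hadoop's `FsPermission` (so it stays self-contained); the class and helper names are mine, not HBase's.

```java
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Illustration of the HBASE-17710 symptom using JDK types: a blanket
// world-writable mode (what the single-argument mkdirs yields here)
// versus an explicit restricted mode. Not Hadoop's FsPermission API.
public class DirPermissions {
    static Set<PosixFilePermission> parse(String mode) {
        // "rwxrwxrwx" is octal 777; "rwxr-xr-x" is octal 755.
        return PosixFilePermissions.fromString(mode);
    }

    public static void main(String[] args) {
        Set<PosixFilePermission> worldWritable = parse("rwxrwxrwx");
        Set<PosixFilePermission> restricted = parse("rwxr-xr-x");
        // The 777 set lets any local user write into the region directories.
        System.out.println(worldWritable.contains(PosixFilePermission.OTHERS_WRITE));
        System.out.println(restricted.contains(PosixFilePermission.OTHERS_WRITE));
    }
}
```

In the Hadoop codepath itself, the analogous fix is to call `fs.mkdirs(path, permission)` with a restrictive `FsPermission` at the spot where the region directory is created, which is what the attached patches target.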
[jira] [Created] (HBASE-17715) expose a sane API to package a standalone client jar
Sergey Shelukhin created HBASE-17715: Summary: expose a sane API to package a standalone client jar Key: HBASE-17715 URL: https://issues.apache.org/jira/browse/HBASE-17715 Project: HBase Issue Type: Task Reporter: Sergey Shelukhin Assignee: Enis Soztutar TableMapReduceUtil currently exposes a method that takes some info from job object iirc, and then makes a standalone jar and adds it to classpath. It would be nice to have an API that one can call with minimum necessary arguments (not dependent on job stuff, "tmpjars" and all that) that would make a standalone client jar at a given path and let the caller manage it after that. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17710) HBase in standalone mode creates directories with 777 permission
[ https://issues.apache.org/jira/browse/HBASE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891217#comment-15891217 ] Josh Elser commented on HBASE-17710: {quote} I think so. Patch v4 adds comment which refers to this JIRA. {quote} Great, thanks for confirming, Ted. Your solution makes sense given your explanations. Thanks for taking the time to investigate this one. > HBase in standalone mode creates directories with 777 permission > > > Key: HBASE-17710 > URL: https://issues.apache.org/jira/browse/HBASE-17710 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 1.1.2 > Environment: HDP-2.5.3 >Reporter: Toshihiro Suzuki >Assignee: Ted Yu > Attachments: 17710.branch-1.v1.txt, 17710.v1.txt, 17710.v2.txt, > 17710.v3.txt, 17710.v4.txt > > > HBase in standalone mode creates directories with 777 permission in > hbase.rootdir. > Ambari metrics collector defaults to standalone mode. > {code} > # find /var/lib/ambari-metrics-collector/hbase -perm 777 -type d -exec ls -ld > {} \; > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/namespace/d0cca53847904f4b4add1caa0ce3a9af/info > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/meta > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/session > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.CATALOG/2f4ce2294cd21cecb58fd1aca5646144/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/0eb67274ece8a4a26cfeeef2c6d4cd37/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/aef86710a4005f98e2dc90675f2eb325/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.STATS/5b1d955e255e55979621214a7e4083b8/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.FUNCTION/32c033735cf144bac5637de23f7f7dd0/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRICS_METADATA/e420dfa799742fe4516ad1e4deefb793/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/HOSTED_APPS_METADATA/110be63e2a9994121fc5b48d663daf2c/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/CONTAINER_METRICS/a103719f87e8430635abf51a7fe98637/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/cdb1d032beb90e350ce309e5d383c78e/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/294deab47187494e845a5199702b4d04/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/1a263b4fe068ef2db5ba1c3e45553354/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/48f94dfb0161d8a28f645d2e1a473235/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_HOURLY/6d096ac3e70e54dd4a8612e17cfc4b11/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_DAILY/e81850d62da64c8d1c67be309f136e23/0 > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/.tmp > drwxrwxrwx. 
2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/.tmp > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_MINUTE/ca9f9754ae9ae4cdc3e1b0523eecc390/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_HOURLY/8412e8a8aec5d6307943fac78ce14c7a/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_DAILY/7c3358aba91ea0d76ddd8bc3ceb2d578/0 > {code} > My analysis is as follows: > FileSystem.mkdirs(Path f) method creates a directory with permission 777.
[jira] [Updated] (HBASE-17710) HBase in standalone mode creates directories with 777 permission
[ https://issues.apache.org/jira/browse/HBASE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-17710: --- Attachment: 17710.v4.txt bq. caveat of running in this standalone mode I think so. Patch v4 adds comment which refers to this JIRA. > HBase in standalone mode creates directories with 777 permission > > > Key: HBASE-17710 > URL: https://issues.apache.org/jira/browse/HBASE-17710 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 1.1.2 > Environment: HDP-2.5.3 >Reporter: Toshihiro Suzuki >Assignee: Ted Yu > Attachments: 17710.branch-1.v1.txt, 17710.v1.txt, 17710.v2.txt, > 17710.v3.txt, 17710.v4.txt > > > HBase in standalone mode creates directories with 777 permission in > hbase.rootdir. > Ambari metrics collector defaults to standalone mode. > {code} > # find /var/lib/ambari-metrics-collector/hbase -perm 777 -type d -exec ls -ld > {} \; > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/namespace/d0cca53847904f4b4add1caa0ce3a9af/info > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/meta > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/session > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.CATALOG/2f4ce2294cd21cecb58fd1aca5646144/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/0eb67274ece8a4a26cfeeef2c6d4cd37/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/aef86710a4005f98e2dc90675f2eb325/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.STATS/5b1d955e255e55979621214a7e4083b8/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.FUNCTION/32c033735cf144bac5637de23f7f7dd0/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRICS_METADATA/e420dfa799742fe4516ad1e4deefb793/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/HOSTED_APPS_METADATA/110be63e2a9994121fc5b48d663daf2c/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/CONTAINER_METRICS/a103719f87e8430635abf51a7fe98637/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/cdb1d032beb90e350ce309e5d383c78e/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/294deab47187494e845a5199702b4d04/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/1a263b4fe068ef2db5ba1c3e45553354/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/48f94dfb0161d8a28f645d2e1a473235/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_HOURLY/6d096ac3e70e54dd4a8612e17cfc4b11/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_DAILY/e81850d62da64c8d1c67be309f136e23/0 > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/.tmp > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/.tmp > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_MINUTE/ca9f9754ae9ae4cdc3e1b0523eecc390/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_HOURLY/8412e8a8aec5d6307943fac78ce14c7a/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_DAILY/7c3358aba91ea0d76ddd8bc3ceb2d578/0 > {code} > My analysis is as follows: > FileSystem.mkdirs(Path f) method creates a directory with permission 777. > Because HFileSystem which inherits FileSystem doesn't override the method, > when we call HFileSystem.mkdirs(Path f), it
[jira] [Commented] (HBASE-16893) Use Iterables.removeIf instead of Iterator.remove in HBase filters
[ https://issues.apache.org/jira/browse/HBASE-16893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891193#comment-15891193 ] Hadoop QA commented on HBASE-16893: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 31s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 8s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 6s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 13s {color} | {color:green} hbase-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 7s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 37m 33s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12834532/HBASE-16893.master.002.patch | | JIRA Issue | HBASE-16893 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux db417b362205 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 613bcb3 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/5905/testReport/ | | modules | C: hbase-client U: hbase-client | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/5905/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Use Iterables.removeIf instead of Iterator.remove in HBase filters > -- > > Key: HBASE-16893 > URL: https://issues.apache.org/jira/browse/HBASE-16893 > Project: HBase > Issue Type: Improvement >Reporter: Robert Yokota >
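The change HBASE-16893 describes, replacing explicit `Iterator.remove` loops with a declarative `removeIf`, can be sketched with the JDK's `Collection.removeIf`; Guava's `Iterables.removeIf(Iterable, Predicate)` works the same way on arbitrary iterables. The filter-list example below is illustrative, not the actual HBase filter code.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Contrast of the two styles discussed in HBASE-16893 on a toy list.
public class RemoveIfDemo {
    // Old style: mutate through the iterator to avoid
    // ConcurrentModificationException while removing during traversal.
    static void dropEmptyWithIterator(List<String> filters) {
        for (Iterator<String> it = filters.iterator(); it.hasNext();) {
            if (it.next().isEmpty()) {
                it.remove();
            }
        }
    }

    // New style: one declarative call expresses the same removal.
    static void dropEmptyWithRemoveIf(List<String> filters) {
        filters.removeIf(String::isEmpty);
    }

    public static void main(String[] args) {
        List<String> a = new ArrayList<>(List.of("row", "", "col", ""));
        List<String> b = new ArrayList<>(a);
        dropEmptyWithIterator(a);
        dropEmptyWithRemoveIf(b);
        System.out.println(a.equals(b)); // both approaches keep [row, col]
    }
}
```

Beyond brevity, `removeIf` centralizes the mutation in one call, which is why no new tests were deemed necessary: behavior is unchanged, only the idiom.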
[jira] [Commented] (HBASE-17707) New More Accurate TableSkew Balancer/Generator
[ https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891189#comment-15891189 ] Ted Yu commented on HBASE-17707: bq. total cost is 23.5, sum multiplier is 1062.0 min cost which need balance is 0.05 Can you set MIN_COST_NEED_BALANCE_KEY to 0.02? This would let the test pass. bq. I found that values as high as 4 worked That would be better than 0 > New More Accurate TableSkew Balancer/Generator > -- > > Key: HBASE-17707 > URL: https://issues.apache.org/jira/browse/HBASE-17707 > Project: HBase > Issue Type: New Feature > Components: Balancer >Affects Versions: 1.2.0 > Environment: CentOS Derivative with a derivative of the 3.18.43 > kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches. >Reporter: Kahlil Oppenheimer >Priority: Minor > Labels: patch > Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, > HBASE-17707-02.patch, HBASE-17707-03.patch > > > This patch includes a new version of the TableSkewCostFunction and a new > TableSkewCandidateGenerator. > The new TableSkewCostFunction computes table skew by counting the minimal > number of region moves required for a given table to perfectly balance the > table across the cluster (i.e. as if the regions from that table had been > round-robin-ed across the cluster). This number of moves is computed for each > table, then normalized to a score between 0-1 by dividing by the number of > moves required in the absolute worst case (i.e. the entire table is stored on > one server), and stored in an array. The cost function then takes a weighted > average of the average and maximum value across all tables. The weights in > this average are configurable to allow for certain users to more strongly > penalize situations where one table is skewed versus where every table is a > little bit skewed. To spread this value more evenly across the range > 0-1, we take the square root of the weighted average to get the final value. 
> The new TableSkewCandidateGenerator generates region moves/swaps to optimize > the above TableSkewCostFunction. It first simply tries to move regions until > each server has the right number of regions, then it swaps regions around > such that each region swap improves table skew across the cluster. > We tested the cost function and generator in our production clusters with > 100s of TBs of data and 100s of tables across dozens of servers and found > both to be very performant and accurate. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17707) New More Accurate TableSkew Balancer/Generator
[ https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891155#comment-15891155 ] Kahlil Oppenheimer commented on HBASE-17707: {code} +conf.setFloat("hbase.master.balancer.stochastic.tableSkewCost", 35); {code} I was trying to reset the config value for each test run, but I just added the config reset to the individual test. {code} +conf.setFloat("hbase.master.balancer.stochastic.tableSkewCost", 0); {code} This value needs to be set low for this test (in my testing I found that values as high as 4 worked) because if it is too high, at some point TableSkew is more costly than having duplicate regions on the same server and org.apache.hadoop.hbase.master.balancer.BalancerTestBase.assertRegionReplicaPlacement(BalancerTestBase.java:362) fails {code} +conf.setFloat(StochasticLoadBalancer.MIN_COST_NEED_BALANCE_KEY, 0.0f); +loadBalancer.setConf(conf); {code} The failing mock cluster is {code} new int[]{48, 53} {code}, which fails because the balancer decides to skip balancing, since the mock cluster is not badly enough unbalanced (i.e. totalCost / sumMultiplier < .05). But then the test fails because the cluster doesn't get balanced. The log prints out "Skipping load balancing because balanced cluster; total cost is 23.5, sum multiplier is 1062.0 min cost which need balance is 0.05" > New More Accurate TableSkew Balancer/Generator > -- > > Key: HBASE-17707 > URL: https://issues.apache.org/jira/browse/HBASE-17707 > Project: HBase > Issue Type: New Feature > Components: Balancer >Affects Versions: 1.2.0 > Environment: CentOS Derivative with a derivative of the 3.18.43 > kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches. >Reporter: Kahlil Oppenheimer >Priority: Minor > Labels: patch > Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, > HBASE-17707-02.patch > > > This patch includes a new version of the TableSkewCostFunction and a new > TableSkewCandidateGenerator. 
> The new TableSkewCostFunction computes table skew by counting the minimal > number of region moves required for a given table to perfectly balance the > table across the cluster (i.e. as if the regions from that table had been > round-robin-ed across the cluster). This number of moves is computed for each > table, then normalized to a score between 0-1 by dividing by the number of > moves required in the absolute worst case (i.e. the entire table is stored on > one server), and stored in an array. The cost function then takes a weighted > average of the average and maximum value across all tables. The weights in > this average are configurable to allow for certain users to more strongly > penalize situations where one table is skewed versus where every table is a > little bit skewed. To spread this value more evenly across the range > 0-1, we take the square root of the weighted average to get the final value. > The new TableSkewCandidateGenerator generates region moves/swaps to optimize > the above TableSkewCostFunction. It first simply tries to move regions until > each server has the right number of regions, then it swaps regions around > such that each region swap improves table skew across the cluster. > We tested the cost function and generator in our production clusters with > 100s of TBs of data and 100s of tables across dozens of servers and found > both to be very performant and accurate. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
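The "Skipping load balancing" behavior quoted in the comment above follows from a simple ratio check, which the numbers in the log make concrete: 23.5 / 1062.0 ≈ 0.022, below the 0.05 default, so the balancer declares the cluster balanced and does nothing; a threshold of 0.02 would let it run. The sketch below is illustrative; the method name is not the StochasticLoadBalancer API.

```java
// Sketch of the balance/skip decision behind the quoted log line
// ("total cost is 23.5, sum multiplier is 1062.0, min cost ... is 0.05").
// The method name is illustrative, not HBase API.
public class BalanceDecision {
    static boolean needsBalance(double totalCost, double sumMultiplier,
                                double minCostNeedBalance) {
        // The balancer only runs when the normalized cost meets the threshold.
        return (totalCost / sumMultiplier) >= minCostNeedBalance;
    }

    public static void main(String[] args) {
        // 23.5 / 1062.0 ~= 0.022: below the 0.05 default, above a 0.02 override.
        System.out.println(needsBalance(23.5, 1062.0, 0.05)); // skipped
        System.out.println(needsBalance(23.5, 1062.0, 0.02)); // would run
    }
}
```

This is why Ted's suggestion of lowering MIN_COST_NEED_BALANCE_KEY for the test works: it moves the threshold below the mock cluster's normalized cost instead of changing the cost function itself.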
[jira] [Commented] (HBASE-17710) HBase in standalone mode creates directories with 777 permission
[ https://issues.apache.org/jira/browse/HBASE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891160#comment-15891160 ] Josh Elser commented on HBASE-17710: bq. Mostly because of the bug in RawLocalFileSystem.mkdirs(). We just need to plug the hole for region directory. But wouldn't we be using a similar call to make the rest of the dirs? Is this just a caveat of running in this standalone mode? If so, +1 (but adding a comment on the method to give a pointer to this issue would be great :)) > HBase in standalone mode creates directories with 777 permission > > > Key: HBASE-17710 > URL: https://issues.apache.org/jira/browse/HBASE-17710 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 1.1.2 > Environment: HDP-2.5.3 >Reporter: Toshihiro Suzuki >Assignee: Ted Yu > Attachments: 17710.branch-1.v1.txt, 17710.v1.txt, 17710.v2.txt, > 17710.v3.txt > > > HBase in standalone mode creates directories with 777 permission in > hbase.rootdir. > Ambari metrics collector defaults to standalone mode. > {code} > # find /var/lib/ambari-metrics-collector/hbase -perm 777 -type d -exec ls -ld > {} \; > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/namespace/d0cca53847904f4b4add1caa0ce3a9af/info > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/meta > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/session > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.CATALOG/2f4ce2294cd21cecb58fd1aca5646144/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/0eb67274ece8a4a26cfeeef2c6d4cd37/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/aef86710a4005f98e2dc90675f2eb325/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.STATS/5b1d955e255e55979621214a7e4083b8/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.FUNCTION/32c033735cf144bac5637de23f7f7dd0/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRICS_METADATA/e420dfa799742fe4516ad1e4deefb793/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/HOSTED_APPS_METADATA/110be63e2a9994121fc5b48d663daf2c/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/CONTAINER_METRICS/a103719f87e8430635abf51a7fe98637/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/cdb1d032beb90e350ce309e5d383c78e/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/294deab47187494e845a5199702b4d04/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/1a263b4fe068ef2db5ba1c3e45553354/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/48f94dfb0161d8a28f645d2e1a473235/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_HOURLY/6d096ac3e70e54dd4a8612e17cfc4b11/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_DAILY/e81850d62da64c8d1c67be309f136e23/0 > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/.tmp > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/.tmp > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_MINUTE/ca9f9754ae9ae4cdc3e1b0523eecc390/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_HOURLY/8412e8a8aec5d6307943fac78ce14c7a/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_DAILY/7c3358aba91ea0d76ddd8bc3ceb2d578/0 >
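The listing above shows the symptom: every region directory comes out world-writable. The fix direction discussed in the comments is to set permissions explicitly rather than trust the create call. The sketch below shows that idea in plain Java NIO on a POSIX filesystem; it is an analogy, not the HBase patch, which works against Hadoop's FileSystem API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Illustration (plain Java, not the HBase patch) of plugging the hole
// described above: after creating a directory, set its permissions
// explicitly instead of relying on the create call to honor the umask,
// which is what RawLocalFileSystem.mkdirs() fails to do in standalone mode.
public class RestrictedDirs {
    static Path createRestricted(Path dir) throws IOException {
        Files.createDirectories(dir);
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rwx------");
        Files.setPosixFilePermissions(dir, perms); // explicit 700, umask-independent
        return dir;
    }
}
```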
[jira] [Updated] (HBASE-17707) New More Accurate TableSkew Balancer/Generator
[ https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kahlil Oppenheimer updated HBASE-17707: --- Status: Open (was: Patch Available) > New More Accurate TableSkew Balancer/Generator > -- > > Key: HBASE-17707 > URL: https://issues.apache.org/jira/browse/HBASE-17707 > Project: HBase > Issue Type: New Feature > Components: Balancer >Affects Versions: 1.2.0 > Environment: CentOS Derivative with a derivative of the 3.18.43 > kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches. >Reporter: Kahlil Oppenheimer >Priority: Minor > Labels: patch > Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, > HBASE-17707-02.patch, HBASE-17707-03.patch > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17707) New More Accurate TableSkew Balancer/Generator
[ https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kahlil Oppenheimer updated HBASE-17707: --- Attachment: HBASE-17707-03.patch > New More Accurate TableSkew Balancer/Generator > -- > > Key: HBASE-17707 > URL: https://issues.apache.org/jira/browse/HBASE-17707 > Project: HBase > Issue Type: New Feature > Components: Balancer >Affects Versions: 1.2.0 > Environment: CentOS Derivative with a derivative of the 3.18.43 > kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches. >Reporter: Kahlil Oppenheimer >Priority: Minor > Labels: patch > Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, > HBASE-17707-02.patch, HBASE-17707-03.patch > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-17707) New More Accurate TableSkew Balancer/Generator
[ https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kahlil Oppenheimer updated HBASE-17707: --- Status: Patch Available (was: Open) > New More Accurate TableSkew Balancer/Generator > -- > > Key: HBASE-17707 > URL: https://issues.apache.org/jira/browse/HBASE-17707 > Project: HBase > Issue Type: New Feature > Components: Balancer >Affects Versions: 1.2.0 > Environment: CentOS Derivative with a derivative of the 3.18.43 > kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches. >Reporter: Kahlil Oppenheimer >Priority: Minor > Labels: patch > Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, > HBASE-17707-02.patch, HBASE-17707-03.patch > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (HBASE-16893) Use Iterables.removeIf instead of Iterator.remove in HBase filters
[ https://issues.apache.org/jira/browse/HBASE-16893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Yokota updated HBASE-16893: -- Assignee: Robert Yokota Status: Patch Available (was: Open) > Use Iterables.removeIf instead of Iterator.remove in HBase filters > -- > > Key: HBASE-16893 > URL: https://issues.apache.org/jira/browse/HBASE-16893 > Project: HBase > Issue Type: Improvement >Reporter: Robert Yokota >Assignee: Robert Yokota >Priority: Minor > Attachments: HBASE-16893.master.001.patch, > HBASE-16893.master.002.patch > > > This is a performance improvement to use Iterables.removeIf in the > filterRowCells method of DependentColumnFilter as described here: > https://rayokota.wordpress.com/2016/10/20/tips-on-writing-custom-hbase-filters/ -- This message was sent by Atlassian JIRA (v6.3.15#6346)
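The improvement above swaps an explicit Iterator.remove() loop for a bulk removal. The sketch below uses Java 8's Collection.removeIf, which mirrors the Guava Iterables.removeIf the patch adopts: on an ArrayList it compacts in a single O(n) pass instead of shifting the remaining elements on every individual removal. The class and filter predicate are illustrative, not the DependentColumnFilter code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative before/after for the change described above. removeIf marks
// matches in one pass and compacts once, avoiding the repeated element
// shifts an Iterator.remove() loop performs on an ArrayList.
public class RemoveIfDemo {
    static List<String> keepLong(List<String> cells) {
        List<String> copy = new ArrayList<>(cells);
        // Before: while (it.hasNext()) { if (it.next().length() < 3) it.remove(); }
        copy.removeIf(c -> c.length() < 3);
        return copy;
    }
}
```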
[jira] [Commented] (HBASE-16755) Honor flush policy under global memstore pressure
[ https://issues.apache.org/jira/browse/HBASE-16755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891067#comment-15891067 ] Enis Soztutar commented on HBASE-16755: --- bq. Enis Soztutar, yes, all of our current flush policies will fall back to all stores if none of the stores meets the threshold. Ok, thanks for looking. > Honor flush policy under global memstore pressure > - > > Key: HBASE-16755 > URL: https://issues.apache.org/jira/browse/HBASE-16755 > Project: HBase > Issue Type: Improvement > Components: regionserver >Reporter: Ashu Pachauri >Assignee: Ashu Pachauri > Fix For: 1.3.1 > > Attachments: HBASE-16755.v0.patch > > > When global memstore reaches the low water mark, we pick the best flushable > region and flush all column families for it. This is a suboptimal approach in > the sense that it leads to an unnecessarily high file creation rate and IO > amplification due to compactions. We should still try to honor the underlying > FlushPolicy. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
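The fallback behavior confirmed in the comment above can be sketched as follows. This is an illustrative selection routine, not HBase's FlushPolicy interface: pick only the column families whose memstore exceeds a threshold, and flush every store when none qualifies, so the region still frees memory under global pressure.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of a size-based flush policy with the "fall back to all stores"
// behavior discussed above. Names are illustrative only.
public class FlushSelection {
    static List<String> storesToFlush(Map<String, Long> storeSizes, long flushSizeLowerBound) {
        List<String> picked = new ArrayList<>();
        for (Map.Entry<String, Long> e : storeSizes.entrySet()) {
            if (e.getValue() >= flushSizeLowerBound) {
                picked.add(e.getKey());
            }
        }
        // No store met the threshold: flush everything rather than nothing.
        return picked.isEmpty() ? new ArrayList<>(storeSizes.keySet()) : picked;
    }
}
```

Honoring this selection even under the global memstore low-water mark, instead of always flushing all families of the chosen region, is exactly what the issue proposes to reduce file creation and compaction IO.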
[jira] [Commented] (HBASE-17706) TableSkewCostFunction improperly computes max skew
[ https://issues.apache.org/jira/browse/HBASE-17706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891014#comment-15891014 ] Hadoop QA commented on HBASE-17706: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 47s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 36s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 25m 31s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 29s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 128m 16s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855434/HBASE-17706-02.patch | | JIRA Issue | HBASE-17706 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 448f88513123 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 613bcb3 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | whitespace | https://builds.apache.org/job/PreCommit-HBASE-Build/5904/artifact/patchprocess/whitespace-eol.txt | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/5904/artifact/patchprocess/patch-unit-hbase-server.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/5904/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/5904/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/5904/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically
[jira] [Commented] (HBASE-17707) New More Accurate TableSkew Balancer/Generator
[ https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890984#comment-15890984 ] Ted Yu commented on HBASE-17707: {code} +conf.setFloat("hbase.master.balancer.stochastic.tableSkewCost", 35); {code} 35 is the default. The above is not needed, right? {code} +conf.setFloat("hbase.master.balancer.stochastic.tableSkewCost", 0); {code} Without the above, which subtest would time out? {code} +conf.setFloat(StochasticLoadBalancer.MIN_COST_NEED_BALANCE_KEY, 0.0f); +loadBalancer.setConf(conf); {code} Without the above, which mock cluster would lead to test failure? We should be careful about changing default values. > New More Accurate TableSkew Balancer/Generator > -- > > Key: HBASE-17707 > URL: https://issues.apache.org/jira/browse/HBASE-17707 > Project: HBase > Issue Type: New Feature > Components: Balancer >Affects Versions: 1.2.0 > Environment: CentOS Derivative with a derivative of the 3.18.43 > kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches. >Reporter: Kahlil Oppenheimer >Priority: Minor > Labels: patch > Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, > HBASE-17707-02.patch > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
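On the MIN_COST_NEED_BALANCE_KEY exchange above: the "Skipping load balancing because balanced cluster" log message quoted earlier in this thread comes down to a simple ratio check. The method and class names below are illustrative, not the StochasticLoadBalancer code.

```java
// Sketch of the skip check behind the quoted log line: the balancer only
// runs when the normalized cost reaches the configured minimum. Names are
// illustrative, not HBase's actual implementation.
public class NeedsBalanceCheck {
    static boolean needsBalance(double totalCost, double sumMultiplier, double minCostNeedBalance) {
        return totalCost / sumMultiplier >= minCostNeedBalance;
    }
}
```

With the values from the quoted log, 23.5 / 1062.0 is about 0.022, below the 0.05 default, so the balancer skips; setting the key to 0.0f in the test forces the balancing run.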
[jira] [Commented] (HBASE-17714) Client heartbeats seems to be broken
[ https://issues.apache.org/jira/browse/HBASE-17714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890951#comment-15890951 ] Andrew Purtell commented on HBASE-17714: Never mind, I found the link to the Phoenix JIRA. It is RenewLeaseIT > Client heartbeats seems to be broken > > > Key: HBASE-17714 > URL: https://issues.apache.org/jira/browse/HBASE-17714 > Project: HBase > Issue Type: Bug >Reporter: Samarth Jain > > We have a test in Phoenix where we introduce an artificial sleep of 2 times > the RPC timeout in preScannerNext() hook of a co-processor. > {code} > public static class SleepingRegionObserver extends SimpleRegionObserver { > public SleepingRegionObserver() {} > > @Override > public boolean preScannerNext(final > ObserverContext<RegionCoprocessorEnvironment> c, > final InternalScanner s, final List<Result> results, > final int limit, final boolean hasMore) throws IOException { > try { > if (SLEEP_NOW && > c.getEnvironment().getRegion().getRegionInfo().getTable().getNameAsString().equals(TABLE_NAME)) > { > Thread.sleep(RPC_TIMEOUT * 2); > } > } catch (InterruptedException e) { > throw new IOException(e); > } > return super.preScannerNext(c, s, results, limit, hasMore); > } > } > {code} > This test was passing fine till 1.1.3 but started failing sometime before > 1.1.9 with an OutOfOrderScannerException. See PHOENIX-3702. [~lhofhansl] > mentioned that we have client heartbeats enabled and that should prevent us > from running into issues like this. FYI, this test fails with 1.2.3 version > of HBase too. > CC [~apurtell], [~jamestaylor] -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17714) Client heartbeats seems to be broken
[ https://issues.apache.org/jira/browse/HBASE-17714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890947#comment-15890947 ] Andrew Purtell commented on HBASE-17714: [~samarthjain] Which test? Given that range (thanks) I can bisect to find the commit that broke this. > Client heartbeats seems to be broken > > > Key: HBASE-17714 > URL: https://issues.apache.org/jira/browse/HBASE-17714 > Project: HBase > Issue Type: Bug >Reporter: Samarth Jain > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17709) New MoveCostFunction that respects MaxMovePercent
[ https://issues.apache.org/jira/browse/HBASE-17709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890929#comment-15890929 ] Hadoop QA commented on HBASE-17709: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 7s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 32m 46s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 11s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 141m 38s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.TestRegionRebalancing | | Timed out junit tests | org.apache.hadoop.hbase.client.TestHCM | | | org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded | | | org.apache.hadoop.hbase.client.TestIllegalTableDescriptor | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855420/HBASE-17709-01.patch | | JIRA Issue | HBASE-17709 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux ac0d98487ee4 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 613bcb3 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/5902/artifact/patchprocess/patch-unit-hbase-server.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/5902/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/5902/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/5902/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This
[jira] [Commented] (HBASE-17710) HBase in standalone mode creates directories with 777 permission
[ https://issues.apache.org/jira/browse/HBASE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890914#comment-15890914 ] Hadoop QA commented on HBASE-17710: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 4s {color} | {color:blue} The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for instructions. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 22s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 31m 37s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 106m 8s {color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 153m 28s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855413/17710.v2.txt | | JIRA Issue | HBASE-17710 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 98f495113370 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 613bcb3 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/5901/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/5901/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > HBase in standalone mode creates directories with 777 permission >
[jira] [Commented] (HBASE-17662) Disable in-memory flush when replaying from WAL
[ https://issues.apache.org/jira/browse/HBASE-17662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890903#comment-15890903 ] Hudson commented on HBASE-17662: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2594 (See [https://builds.apache.org/job/HBase-Trunk_matrix/2594/]) HBASE-17662 Disable in-memory flush when replaying from WAL (anoopsamjohn: rev 613bcb3622ecb1783c030f34ea2975280e1c43c1) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactingMemStore.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/AbstractTestWALReplay.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultMemStore.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/AbstractMemStore.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java > Disable in-memory flush when replaying from WAL > --- > > Key: HBASE-17662 > URL: https://issues.apache.org/jira/browse/HBASE-17662 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: Anastasia Braginsky >Assignee: Anastasia Braginsky > Fix For: 2.0.0 > > Attachments: HBASE-17662-V02.patch, HBASE-17662-V03.patch, > HBASE-17662-V04.patch, HBASE-17662-V05.patch, HBASE-17662-V06.patch, > HBASE-17662-V08.patch, HBASE-17662-V09-II.patch, HBASE-17662-V09.patch > > > When replaying the edits from WAL, the region's updateLock is not taken, > because a single threaded action is assumed. However, the thread-safeness of > the in-memory flush of CompactingMemStore is based on taking the region's > updateLock. 
> The in-memory flush can be skipped at replay time (everything is flushed to > disk just after the replay anyway). Therefore it is acceptable to skip the > in-memory flush action while the updates come in as part of the replay from > WAL. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
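The skip described above can be sketched as a simple guard around the in-memory flush decision. This is a hedged toy model, not the actual CompactingMemStore change; the class name, sizing logic, and threshold handling here are invented for illustration (the real patch works through the MemStore/AbstractMemStore hooks edited in the commit listed above).

```java
// Toy model (illustrative names): a memstore that skips its in-memory
// flush while WAL replay is in progress, since replay runs single-threaded
// without taking the region's updateLock.
public class ReplayAwareMemStore {
    private long activeSize = 0;
    private final long inMemoryFlushThreshold;
    // Toggled by the region around recovered-edits replay.
    private volatile boolean inWalReplay = false;

    public ReplayAwareMemStore(long inMemoryFlushThreshold) {
        this.inMemoryFlushThreshold = inMemoryFlushThreshold;
    }

    public void startReplayingFromWAL() { inWalReplay = true; }
    public void stopReplayingFromWAL()  { inWalReplay = false; }

    public void add(long cellSize) {
        activeSize += cellSize;
        if (shouldFlushInMemory()) {
            flushInMemory();
        }
    }

    // Never flush in memory during replay: everything is flushed to disk
    // right after the replay, and the updateLock is not held.
    boolean shouldFlushInMemory() {
        return !inWalReplay && activeSize >= inMemoryFlushThreshold;
    }

    private void flushInMemory() { activeSize = 0; }

    public long getActiveSize() { return activeSize; }
}
```

The point of the guard is that correctness of the in-memory flush depends on the updateLock, which replay never takes, so the flag simply disables that path for the duration of the replay.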
[jira] [Commented] (HBASE-16755) Honor flush policy under global memstore pressure
[ https://issues.apache.org/jira/browse/HBASE-16755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890901#comment-15890901 ] Gary Helmling commented on HBASE-16755: --- [~enis], yes, all of our current flush policies will fall back to all stores if none of the stores meets the threshold. > Honor flush policy under global memstore pressure > - > > Key: HBASE-16755 > URL: https://issues.apache.org/jira/browse/HBASE-16755 > Project: HBase > Issue Type: Improvement > Components: regionserver >Reporter: Ashu Pachauri >Assignee: Ashu Pachauri > Fix For: 1.3.1 > > Attachments: HBASE-16755.v0.patch > > > When global memstore reaches the low water mark, we pick the best flushable > region and flush all column families for it. This is a suboptimal approach in > the sense that it leads to an unnecessarily high file creation rate and IO > amplification due to compactions. We should still try to honor the underlying > FlushPolicy. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (HBASE-17714) Client heartbeats seems to be broken
Samarth Jain created HBASE-17714: Summary: Client heartbeats seems to be broken Key: HBASE-17714 URL: https://issues.apache.org/jira/browse/HBASE-17714 Project: HBase Issue Type: Bug Reporter: Samarth Jain We have a test in Phoenix where we introduce an artificial sleep of 2 times the RPC timeout in the preScannerNext() hook of a coprocessor.
{code}
public static class SleepingRegionObserver extends SimpleRegionObserver {
    public SleepingRegionObserver() {}

    @Override
    public boolean preScannerNext(final ObserverContext<RegionCoprocessorEnvironment> c,
            final InternalScanner s, final List<Result> results, final int limit,
            final boolean hasMore) throws IOException {
        try {
            if (SLEEP_NOW && c.getEnvironment().getRegion().getRegionInfo()
                    .getTable().getNameAsString().equals(TABLE_NAME)) {
                Thread.sleep(RPC_TIMEOUT * 2);
            }
        } catch (InterruptedException e) {
            throw new IOException(e);
        }
        return super.preScannerNext(c, s, results, limit, hasMore);
    }
}
{code}
This test was passing fine till 1.1.3 but started failing sometime before 1.1.9 with an OutOfOrderScannerException. See PHOENIX-3702. [~lhofhansl] mentioned that we have client heartbeats enabled and that should prevent us from running into issues like this. FYI, this test also fails with the 1.2.3 version of HBase. CC [~apurtell], [~jamestaylor] -- This message was sent by Atlassian JIRA (v6.3.15#6346)
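The heartbeat mechanism the reporter expects can be summarized with a small toy model. This is a hedged sketch only (the names ScanResult, HeartbeatScanLoop, and the per-row cost array are invented, and the real logic lives in the region server's scan RPC handling): while filling a scan response the server checks a time budget derived from the RPC timeout, and if the budget is about to run out it returns early with a "heartbeat" response so the client resets its timer instead of throwing a timeout.

```java
// Toy model of scanner heartbeats (illustrative, not the server code).
public class HeartbeatScanLoop {
    public static final class ScanResult {
        public final int rowsReturned;
        public final boolean heartbeat;   // true => partial result, client keeps scanning
        ScanResult(int rowsReturned, boolean heartbeat) {
            this.rowsReturned = rowsReturned;
            this.heartbeat = heartbeat;
        }
    }

    /**
     * @param rowCosts simulated per-row scan cost in ms (e.g. coprocessor sleeps)
     * @param limit    rows requested by the client
     * @param deadline time budget for this RPC in ms
     */
    public static ScanResult scan(long[] rowCosts, int limit, long deadline) {
        long elapsed = 0;
        int rows = 0;
        for (int i = 0; i < rowCosts.length && rows < limit; i++) {
            if (elapsed + rowCosts[i] > deadline) {
                // Out of time: send what we have as a heartbeat message.
                return new ScanResult(rows, true);
            }
            elapsed += rowCosts[i];
            rows++;
        }
        return new ScanResult(rows, false);
    }
}
```

Note that a sleep injected before any row is produced (as in the preScannerNext hook above) can defeat this scheme, since the server never gets a chance to emit the heartbeat response.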
[jira] [Commented] (HBASE-16755) Honor flush policy under global memstore pressure
[ https://issues.apache.org/jira/browse/HBASE-16755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890899#comment-15890899 ] Gary Helmling commented on HBASE-16755: --- [~Apache9], yes, currently with either FlushAllLargeStoresPolicy or FlushNonSloppyStoresFirstPolicy, we still will fall back to all stores in the case that none of the stores meets the flush threshold. So we will still ensure that something is always flushed. Thanks for taking a look, I'll go ahead and commit. > Honor flush policy under global memstore pressure > - > > Key: HBASE-16755 > URL: https://issues.apache.org/jira/browse/HBASE-16755 > Project: HBase > Issue Type: Improvement > Components: regionserver >Reporter: Ashu Pachauri >Assignee: Ashu Pachauri > Fix For: 1.3.1 > > Attachments: HBASE-16755.v0.patch > > > When global memstore reaches the low water mark, we pick the best flushable > region and flush all column families for it. This is a suboptimal approach in > the sense that it leads to an unnecessarily high file creation rate and IO > amplification due to compactions. We should still try to honor the underlying > FlushPolicy. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
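The fallback behaviour Gary describes can be sketched as follows. This is a hedged illustration, not the actual FlushPolicy API (the method name and the plain array of store sizes are invented): pick the stores whose memstores exceed the per-store threshold, and if none qualifies fall back to flushing every store, so a forced flush under global memstore pressure always frees memory.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the "fall back to all stores" rule.
public class FlushPolicyFallback {
    public static List<Integer> selectStoresToFlush(long[] storeSizes, long threshold) {
        List<Integer> selected = new ArrayList<>();
        for (int i = 0; i < storeSizes.length; i++) {
            if (storeSizes[i] >= threshold) {
                selected.add(i);
            }
        }
        if (selected.isEmpty()) {
            // Fallback: flush all stores rather than flushing nothing.
            for (int i = 0; i < storeSizes.length; i++) {
                selected.add(i);
            }
        }
        return selected;
    }
}
```

The fallback branch is what guarantees the invariant discussed in the thread: even when no single store meets the flush threshold, the forced flush still flushes something.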
[jira] [Commented] (HBASE-17707) New More Accurate TableSkew Balancer/Generator
[ https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890893#comment-15890893 ] Hadoop QA commented on HBASE-17707: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 5s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 54s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 98m 46s {color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 139m 17s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855414/HBASE-17707-02.patch | | JIRA Issue | HBASE-17707 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 70f0d438e89e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 613bcb3 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/5900/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/5900/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > New More Accurate TableSkew Balancer/Generator > -- > > Key: HBASE-17707 > URL: https://issues.apache.org/jira/browse/HBASE-17707 > Project: HBase > Issue Type: New Feature > Components: Balancer >Affects Versions: 1.2.0 > Environment: CentOS Derivative with a derivative of the 3.18.43 > kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches. >
[jira] [Commented] (HBASE-16755) Honor flush policy under global memstore pressure
[ https://issues.apache.org/jira/browse/HBASE-16755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890881#comment-15890881 ] Enis Soztutar commented on HBASE-16755: --- bq. We pass true to make sure that we flush something out. So if we can make sure that we can flush something out when we pass false here, I think the patch is OK. Indeed. When the memstore is full, it is a forced flush, so we have to make sure that, no matter what, we are flushing something. Does the patch ensure that this happens? > Honor flush policy under global memstore pressure > - > > Key: HBASE-16755 > URL: https://issues.apache.org/jira/browse/HBASE-16755 > Project: HBase > Issue Type: Improvement > Components: regionserver >Reporter: Ashu Pachauri >Assignee: Ashu Pachauri > Fix For: 1.3.1 > > Attachments: HBASE-16755.v0.patch > > > When global memstore reaches the low water mark, we pick the best flushable > region and flush all column families for it. This is a suboptimal approach in > the sense that it leads to an unnecessarily high file creation rate and IO > amplification due to compactions. We should still try to honor the underlying > FlushPolicy. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache
[ https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890868#comment-15890868 ] Hudson commented on HBASE-16630: SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #129 (See [https://builds.apache.org/job/HBase-1.3-JDK8/129/]) HBASE-16630 Fragmentation in long running Bucket Cache (ramkrishna: rev e08f4bf6843d677a942137873328dc035616cf6f) * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java > Fragmentation in long running Bucket Cache > -- > > Key: HBASE-16630 > URL: https://issues.apache.org/jira/browse/HBASE-16630 > Project: HBase > Issue Type: Bug > Components: BucketCache >Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3 >Reporter: deepankar >Assignee: deepankar >Priority: Critical > Fix For: 2.0.0, 1.3.1, 1.2.6 > > Attachments: 16630-v2-suggest.patch, 16630-v3-suggest.patch, > HBASE-16630.patch, HBASE-16630-v2.patch, HBASE-16630-v3-branch-1.patch, > HBASE-16630-v3-branch-1.X.patch, HBASE-16630-v3.patch, > HBASE-16630-v4-branch-1.X.patch > > > As we have been running the bucket cache for a long time in our system, we > are observing cases where some nodes, after some time, do not fully utilize > the bucket cache. In some cases it is even worse: they get stuck at a value > < 0.25 % of the bucket cache (DEFAULT_MEMORY_FACTOR, as all our tables are > configured in-memory for simplicity's sake). > We took a heap dump, analyzed what is happening, and saw that it is a classic > case of fragmentation. The current implementation of BucketCache (mainly > BucketAllocator) relies on the logic that fullyFreeBuckets are available for > switching/adjusting cache usage between different bucketSizes. But once a > compaction / bulkload happens and the blocks are evicted from a bucket size, > these are usually evicted from random places in the buckets of that > bucketSize, thus locking the number of buckets associated with it. In the > worst cases of fragmentation we have seen some bucketSizes with an occupancy > ratio of < 10 %, but they don't have any completelyFreeBuckets to share with > the other bucketSizes. > Currently the existing eviction logic helps in the cases where the cache used > is more than the MEMORY_FACTOR or MULTI_FACTOR, and once those evictions are > done, the eviction (freeSpace function) will not evict anything, so the cache > utilization will be stuck at that value without any allocations for the other > required sizes. > The fix we came up with is simple: we defragment (compact) the bucketSize, > thus increasing the occupancy ratio and also freeing up buckets to be fully > free. This logic itself is not complicated, as the bucketAllocator takes care > of packing the blocks into the buckets; we need to evict and re-allocate the > blocks for all the bucketSizes that don't fit the criteria. > I am attaching an initial patch just to give an idea of what we are thinking, > and I'll improve it based on the comments from the community. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
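The defragmentation idea in the description above can be captured with a small counting model. This is a hedged toy model, not the BucketAllocator API (the class and method names are invented): each bucket holds a fixed number of block slots, and after random evictions many buckets are partially occupied, so none is completely free and no bucket can be handed over to another bucket size. Repacking the surviving blocks into as few buckets as possible frees whole buckets.

```java
// Toy model: how many completely free buckets does tight repacking yield?
public class BucketDefragSketch {
    public static int completelyFreeBucketsAfterRepack(int[] occupiedPerBucket,
            int slotsPerBucket) {
        int totalBlocks = 0;
        for (int occ : occupiedPerBucket) {
            totalBlocks += occ;
        }
        // Minimum buckets needed once blocks are packed tightly (ceil division).
        int bucketsNeeded = (totalBlocks + slotsPerBucket - 1) / slotsPerBucket;
        return occupiedPerBucket.length - bucketsNeeded;
    }
}
```

For example, four buckets of four slots each holding one block apiece have zero completely free buckets before repacking, but three after, which is exactly the capacity the fragmented allocator could not lend to other bucket sizes.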
[jira] [Commented] (HBASE-17501) NullPointerException after Datanodes Decommissioned and Terminated
[ https://issues.apache.org/jira/browse/HBASE-17501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890808#comment-15890808 ] Hadoop QA commented on HBASE-17501: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 10s {color} | {color:blue} The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for instructions. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 0s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 29m 30s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 24s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 142m 52s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 185m 31s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.regionserver.TestHRegion | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855400/HBASE_17501.patch.v3 | | JIRA Issue | HBASE-17501 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux d994067a55f1 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 613bcb3 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/5899/artifact/patchprocess/patch-unit-hbase-server.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/5899/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results |
[jira] [Commented] (HBASE-17710) HBase in standalone mode creates directories with 777 permission
[ https://issues.apache.org/jira/browse/HBASE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890791#comment-15890791 ] Ted Yu commented on HBASE-17710: bq. why this issue is only arising down below? Mostly because of the bug in RawLocalFileSystem.mkdirs(). We just need to plug the hole for region directory. > HBase in standalone mode creates directories with 777 permission > > > Key: HBASE-17710 > URL: https://issues.apache.org/jira/browse/HBASE-17710 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 1.1.2 > Environment: HDP-2.5.3 >Reporter: Toshihiro Suzuki >Assignee: Ted Yu > Attachments: 17710.branch-1.v1.txt, 17710.v1.txt, 17710.v2.txt, > 17710.v3.txt > > > HBase in standalone mode creates directories with 777 permission in > hbase.rootdir. > Ambari metrics collector defaults to standalone mode. > {code} > # find /var/lib/ambari-metrics-collector/hbase -perm 777 -type d -exec ls -ld > {} \; > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/namespace/d0cca53847904f4b4add1caa0ce3a9af/info > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/meta > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/session > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.CATALOG/2f4ce2294cd21cecb58fd1aca5646144/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/0eb67274ece8a4a26cfeeef2c6d4cd37/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/aef86710a4005f98e2dc90675f2eb325/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.STATS/5b1d955e255e55979621214a7e4083b8/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.FUNCTION/32c033735cf144bac5637de23f7f7dd0/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRICS_METADATA/e420dfa799742fe4516ad1e4deefb793/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/HOSTED_APPS_METADATA/110be63e2a9994121fc5b48d663daf2c/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/CONTAINER_METRICS/a103719f87e8430635abf51a7fe98637/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/cdb1d032beb90e350ce309e5d383c78e/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/294deab47187494e845a5199702b4d04/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/1a263b4fe068ef2db5ba1c3e45553354/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/48f94dfb0161d8a28f645d2e1a473235/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_HOURLY/6d096ac3e70e54dd4a8612e17cfc4b11/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_DAILY/e81850d62da64c8d1c67be309f136e23/0 > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/.tmp > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/c8eadeb7dead8fda9729b8e9b10c4929/.tmp > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_MINUTE/ca9f9754ae9ae4cdc3e1b0523eecc390/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_HOURLY/8412e8a8aec5d6307943fac78ce14c7a/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:18 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_DAILY/7c3358aba91ea0d76ddd8bc3ceb2d578/0 > {code} > My analysis is as follows: > FileSystem.mkdirs(Path f) method creates a directory with permission 777. > Because HFileSystem which inherits FileSystem doesn't override the method, >
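The direction of the fix discussed above (create the region directory with an explicit permission rather than inheriting a permissive default from mkdirs) can be illustrated with plain java.nio. This is a hedged sketch only: the real patch works against Hadoop's FileSystem/RawLocalFileSystem, whose API is not shown here, and the class and method names below are invented.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Illustrative sketch: create a directory with an explicit POSIX mode.
public class RestrictedMkdirs {
    public static String createWithPerms(Path dir, String posixPerms) throws IOException {
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString(posixPerms);
        Files.createDirectories(dir, PosixFilePermissions.asFileAttribute(perms));
        // Re-assert the permission so the process umask cannot widen it.
        Files.setPosixFilePermissions(dir, perms);
        return PosixFilePermissions.toString(Files.getPosixFilePermissions(dir));
    }

    // Convenience demo in a temp directory; wraps the checked exception.
    public static String demo() {
        try {
            Path dir = Files.createTempDirectory("perm-demo").resolve("region");
            return createWithPerms(dir, "rwxr-xr-x");
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The contrast with the bug report is that a bare mkdirs-style call leaves the mode to the implementation's default, which on the buggy path ends up as 777.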
[jira] [Commented] (HBASE-17710) HBase in standalone mode creates directories with 777 permission
[ https://issues.apache.org/jira/browse/HBASE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890782#comment-15890782 ] Hadoop QA commented on HBASE-17710: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 4s {color} | {color:blue} The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for instructions. {color} | | {color:red}-1{color} | {color:red} docker {color} | {color:red} 18m 52s {color} | {color:red} Docker failed to build yetus/hbase:8d52d23. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12855429/17710.v3.txt | | JIRA Issue | HBASE-17710 | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/5903/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > HBase in standalone mode creates directories with 777 permission > > > Key: HBASE-17710 > URL: https://issues.apache.org/jira/browse/HBASE-17710 > Project: HBase > Issue Type: Bug > Components: regionserver >Affects Versions: 1.1.2 > Environment: HDP-2.5.3 >Reporter: Toshihiro Suzuki >Assignee: Ted Yu > Attachments: 17710.branch-1.v1.txt, 17710.v1.txt, 17710.v2.txt, > 17710.v3.txt > > > HBase in standalone mode creates directories with 777 permission in > hbase.rootdir. > Ambari metrics collector defaults to standalone mode. > {code} > # find /var/lib/ambari-metrics-collector/hbase -perm 777 -type d -exec ls -ld > {} \; > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/namespace/d0cca53847904f4b4add1caa0ce3a9af/info > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/meta > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/hbase/backup/cbceb8fccd968b4b4583365d4dc6e377/session > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.CATALOG/2f4ce2294cd21cecb58fd1aca5646144/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/0eb67274ece8a4a26cfeeef2c6d4cd37/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.SEQUENCE/aef86710a4005f98e2dc90675f2eb325/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.STATS/5b1d955e255e55979621214a7e4083b8/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/SYSTEM.FUNCTION/32c033735cf144bac5637de23f7f7dd0/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRICS_METADATA/e420dfa799742fe4516ad1e4deefb793/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/HOSTED_APPS_METADATA/110be63e2a9994121fc5b48d663daf2c/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/CONTAINER_METRICS/a103719f87e8430635abf51a7fe98637/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/cdb1d032beb90e350ce309e5d383c78e/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD/294deab47187494e845a5199702b4d04/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/1a263b4fe068ef2db5ba1c3e45553354/0 > drwxrwxrwx. 
2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE/48f94dfb0161d8a28f645d2e1a473235/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_HOURLY/6d096ac3e70e54dd4a8612e17cfc4b11/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:17 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_DAILY/e81850d62da64c8d1c67be309f136e23/0 > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/0 > drwxrwxrwx. 2 ams hadoop 6 Mar 1 02:21 > /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE/b43ff796de887197834ad62fdb612b59/.tmp > drwxrwxrwx. 2 ams hadoop 45 Mar 1 02:21 >
[jira] [Comment Edited] (HBASE-17709) New MoveCostFunction that respects MaxMovePercent
[ https://issues.apache.org/jira/browse/HBASE-17709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890779#comment-15890779 ] Kahlil Oppenheimer edited comment on HBASE-17709 at 3/1/17 6:45 PM: Not sure why this build failed, but I resubmitted a patch and made sure it was properly based off master. I can build locally and all tests pass locally [~tedyu] was (Author: kahliloppenheimer): Not sure why this build failed, but I resubmitted a patch and made sure it was properly based off master. I can build locally and all tests pass locally. > New MoveCostFunction that respects MaxMovePercent > - > > Key: HBASE-17709 > URL: https://issues.apache.org/jira/browse/HBASE-17709 > Project: HBase > Issue Type: Bug > Components: Balancer >Affects Versions: 1.2.0 > Environment: CentOS Derivative with a derivative of the 3.18.43 > kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches. >Reporter: Kahlil Oppenheimer >Priority: Minor > Labels: patch > Attachments: HBASE-17709-00.patch, HBASE-17709-01.patch > > > The balancer does not fully respect the maxMovePercent configuration. > Specifically, if the number of regions being moved is less than 600, the > balancer currently allows that number of region moves regardless of what > value is set for maxMovePercent. > This patch fixes that behavior and simplifies the moveCost function as well. > In addition, this patch adds short-circuiting logic to the balancer to > terminate early once the maximum number of moves are reached (and assuming > the new plan has enough of a cost improvement). -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HBASE-17707) New More Accurate TableSkew Balancer/Generator
[ https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890780#comment-15890780 ] Kahlil Oppenheimer commented on HBASE-17707:
I addressed all of your feedback and fixed the broken test. The changes should be uploaded both here and to ReviewBoard. [~tedyu]

> New More Accurate TableSkew Balancer/Generator
> ----------------------------------------------
>
>                 Key: HBASE-17707
>                 URL: https://issues.apache.org/jira/browse/HBASE-17707
>             Project: HBase
>          Issue Type: New Feature
>          Components: Balancer
>    Affects Versions: 1.2.0
>         Environment: CentOS Derivative with a derivative of the 3.18.43 kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches.
>            Reporter: Kahlil Oppenheimer
>            Priority: Minor
>              Labels: patch
>         Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, HBASE-17707-02.patch
>
> This patch includes a new version of the TableSkewCostFunction and a new TableSkewCandidateGenerator.
> The new TableSkewCostFunction computes table skew by counting the minimal number of region moves required for a given table to be perfectly balanced across the cluster (i.e. as if the regions from that table had been round-robin-ed across the cluster). This number of moves is computed for each table, then normalized to a score between 0 and 1 by dividing by the number of moves required in the absolute worst case (i.e. the entire table stored on one server), and stored in an array. The cost function then takes a weighted average of the average and maximum value across all tables. The weights in this average are configurable, to allow users to more strongly penalize situations where one table is heavily skewed versus where every table is a little bit skewed. To spread this value more evenly across the range 0-1, we take the square root of the weighted average to get the final value.
> The new TableSkewCandidateGenerator generates region moves/swaps to optimize the above TableSkewCostFunction. It first moves regions until each server has the right number of regions, then swaps regions such that each swap improves table skew across the cluster.
> We tested the cost function and generator in our production clusters with 100s of TBs of data and 100s of tables across dozens of servers, and found both to be performant and accurate.
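The cost computation described above can be sketched roughly as follows. This is an illustrative sketch, not the patch's actual code: the class and method names, and the equal average/max weights, are assumptions.

```java
import java.util.Arrays;

// Sketch of the table-skew cost described in HBASE-17707 (hypothetical names).
public class TableSkewSketch {

    // Hypothetical defaults; the patch makes these weights configurable.
    static final double AVG_WEIGHT = 0.5;
    static final double MAX_WEIGHT = 0.5;

    /** Minimal moves to round-robin one table's regions across servers. */
    static int minMovesToBalance(int[] regionsPerServer) {
        int total = Arrays.stream(regionsPerServer).sum();
        int servers = regionsPerServer.length;
        int floor = total / servers;
        int remainder = total % servers;  // this many servers get floor + 1
        // Assign the larger targets to the servers already holding the most
        // regions; that assignment minimizes the number of moves.
        int[] sorted = regionsPerServer.clone();
        Arrays.sort(sorted);              // ascending
        int moves = 0;
        for (int i = 0; i < servers; i++) {
            int target = (i >= servers - remainder) ? floor + 1 : floor;
            moves += Math.max(0, sorted[i] - target);
        }
        return moves;
    }

    /** Final skew score in [0, 1]: sqrt of weighted avg of mean and max. */
    static double cost(int[][] regionsPerTablePerServer) {
        double sum = 0, max = 0;
        for (int[] table : regionsPerTablePerServer) {
            int total = Arrays.stream(table).sum();
            int servers = table.length;
            // Worst case: the entire table on one server.
            int worst = total - (total + servers - 1) / servers;
            double score = (worst == 0) ? 0
                    : (double) minMovesToBalance(table) / worst;
            sum += score;
            max = Math.max(max, score);
        }
        double avg = sum / regionsPerTablePerServer.length;
        double weighted = (AVG_WEIGHT * avg + MAX_WEIGHT * max)
                / (AVG_WEIGHT + MAX_WEIGHT);
        return Math.sqrt(weighted);  // spreads values more evenly over 0-1
    }
}
```

For example, a table with all four of its regions on one of four servers needs three moves to balance, which is also the worst case, so its score is 1; a perfectly round-robin-ed table scores 0.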