[jira] [Updated] (HBASE-17661) fix the queue length passed to FastPathBalancedQueueRpcExecutor

2017-02-18 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17661:
--
Status: Patch Available  (was: Open)

> fix the queue length passed to FastPathBalancedQueueRpcExecutor
> ---
>
> Key: HBASE-17661
> URL: https://issues.apache.org/jira/browse/HBASE-17661
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17661.branch-1.v0.patch, HBASE-17661.v0.patch
>
>
> {code:title=SimpleRpcScheduler.java|borderStyle=solid}
> if (callqReadShare > 0) {
>   // at least 1 read handler and 1 write handler
>   callExecutor = new RWQueueRpcExecutor("deafult.RWQ", Math.max(2, handlerCount),
>       maxQueueLength, priority, conf, server);
> } else {
>   if (RpcExecutor.isFifoQueueType(callQueueType)) {
>     callExecutor = new FastPathBalancedQueueRpcExecutor("deafult.FPBQ", handlerCount,
>         maxPriorityQueueLength, priority, conf, server);
>   } else {
>     callExecutor = new BalancedQueueRpcExecutor("deafult.BQ", handlerCount, maxQueueLength,
>         priority, conf, server);
>   }
> }
> {code}
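
Reading the snippet above against the issue title, the FIFO branch presumably should receive the same {{maxQueueLength}} that the other executors get, rather than {{maxPriorityQueueLength}}. A minimal sketch of that change (an illustration, not the attached patch itself):

{code:title=SimpleRpcScheduler.java (sketch of the intended change)|borderStyle=solid}
if (RpcExecutor.isFifoQueueType(callQueueType)) {
  // Pass the general call-queue length here; maxPriorityQueueLength is meant for the priority executor.
  callExecutor = new FastPathBalancedQueueRpcExecutor("deafult.FPBQ", handlerCount,
      maxQueueLength, priority, conf, server);
} else {
  callExecutor = new BalancedQueueRpcExecutor("deafult.BQ", handlerCount, maxQueueLength,
      priority, conf, server);
}
{code}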



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17661) fix the queue length passed to FastPathBalancedQueueRpcExecutor

2017-02-18 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17661:
--
Attachment: HBASE-17661.v0.patch

> fix the queue length passed to FastPathBalancedQueueRpcExecutor
> ---
>
> Key: HBASE-17661
> URL: https://issues.apache.org/jira/browse/HBASE-17661
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17661.branch-1.v0.patch, HBASE-17661.v0.patch
>
>
> {code:title=SimpleRpcScheduler.java|borderStyle=solid}
> if (callqReadShare > 0) {
>   // at least 1 read handler and 1 write handler
>   callExecutor = new RWQueueRpcExecutor("deafult.RWQ", Math.max(2, 
> handlerCount),
> maxQueueLength, priority, conf, server);
> } else {
>   if (RpcExecutor.isFifoQueueType(callQueueType)) {
> callExecutor = new FastPathBalancedQueueRpcExecutor("deafult.FPBQ", 
> handlerCount,
> maxPriorityQueueLength, priority, conf, server);
> } else {
> callExecutor = new BalancedQueueRpcExecutor("deafult.BQ", 
> handlerCount, maxQueueLength,
> priority, conf, server);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17661) fix the queue length passed to FastPathBalancedQueueRpcExecutor

2017-02-18 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17661:
--
Status: Open  (was: Patch Available)

> fix the queue length passed to FastPathBalancedQueueRpcExecutor
> ---
>
> Key: HBASE-17661
> URL: https://issues.apache.org/jira/browse/HBASE-17661
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17661.branch-1.v0.patch
>
>
> {code:title=SimpleRpcScheduler.java|borderStyle=solid}
> if (callqReadShare > 0) {
>   // at least 1 read handler and 1 write handler
>   callExecutor = new RWQueueRpcExecutor("deafult.RWQ", Math.max(2, 
> handlerCount),
> maxQueueLength, priority, conf, server);
> } else {
>   if (RpcExecutor.isFifoQueueType(callQueueType)) {
> callExecutor = new FastPathBalancedQueueRpcExecutor("deafult.FPBQ", 
> handlerCount,
> maxPriorityQueueLength, priority, conf, server);
> } else {
> callExecutor = new BalancedQueueRpcExecutor("deafult.BQ", 
> handlerCount, maxQueueLength,
> priority, conf, server);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17661) fix the queue length passed to FastPathBalancedQueueRpcExecutor

2017-02-18 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17661:
--
Assignee: ChiaPing Tsai
  Status: Patch Available  (was: Open)

> fix the queue length passed to FastPathBalancedQueueRpcExecutor
> ---
>
> Key: HBASE-17661
> URL: https://issues.apache.org/jira/browse/HBASE-17661
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17661.branch-1.v0.patch
>
>
> {code:title=SimpleRpcScheduler.java|borderStyle=solid}
> if (callqReadShare > 0) {
>   // at least 1 read handler and 1 write handler
>   callExecutor = new RWQueueRpcExecutor("deafult.RWQ", Math.max(2, 
> handlerCount),
> maxQueueLength, priority, conf, server);
> } else {
>   if (RpcExecutor.isFifoQueueType(callQueueType)) {
> callExecutor = new FastPathBalancedQueueRpcExecutor("deafult.FPBQ", 
> handlerCount,
> maxPriorityQueueLength, priority, conf, server);
> } else {
> callExecutor = new BalancedQueueRpcExecutor("deafult.BQ", 
> handlerCount, maxQueueLength,
> priority, conf, server);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17661) fix the queue length passed to FastPathBalancedQueueRpcExecutor

2017-02-18 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17661:
--
Attachment: HBASE-17661.branch-1.v0.patch

> fix the queue length passed to FastPathBalancedQueueRpcExecutor
> ---
>
> Key: HBASE-17661
> URL: https://issues.apache.org/jira/browse/HBASE-17661
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17661.branch-1.v0.patch
>
>
> {code:title=SimpleRpcScheduler.java|borderStyle=solid}
> if (callqReadShare > 0) {
>   // at least 1 read handler and 1 write handler
>   callExecutor = new RWQueueRpcExecutor("deafult.RWQ", Math.max(2, 
> handlerCount),
> maxQueueLength, priority, conf, server);
> } else {
>   if (RpcExecutor.isFifoQueueType(callQueueType)) {
> callExecutor = new FastPathBalancedQueueRpcExecutor("deafult.FPBQ", 
> handlerCount,
> maxPriorityQueueLength, priority, conf, server);
> } else {
> callExecutor = new BalancedQueueRpcExecutor("deafult.BQ", 
> handlerCount, maxQueueLength,
> priority, conf, server);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17661) fix the queue length passed to FastPathBalancedQueueRpcExecutor

2017-02-18 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17661:
--
Summary: fix the queue length passed to FastPathBalancedQueueRpcExecutor  
(was: fix the queue size passed to FastPathBalancedQueueRpcExecutor)

> fix the queue length passed to FastPathBalancedQueueRpcExecutor
> ---
>
> Key: HBASE-17661
> URL: https://issues.apache.org/jira/browse/HBASE-17661
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
>
> {code:title=SimpleRpcScheduler.java|borderStyle=solid}
> if (callqReadShare > 0) {
>   // at least 1 read handler and 1 write handler
>   callExecutor = new RWQueueRpcExecutor("deafult.RWQ", Math.max(2, 
> handlerCount),
> maxQueueLength, priority, conf, server);
> } else {
>   if (RpcExecutor.isFifoQueueType(callQueueType)) {
> callExecutor = new FastPathBalancedQueueRpcExecutor("deafult.FPBQ", 
> handlerCount,
> maxPriorityQueueLength, priority, conf, server);
> } else {
> callExecutor = new BalancedQueueRpcExecutor("deafult.BQ", 
> handlerCount, maxQueueLength,
> priority, conf, server);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17661) fix the queue size passed to FastPathBalancedQueueRpcExecutor

2017-02-18 Thread ChiaPing Tsai (JIRA)
ChiaPing Tsai created HBASE-17661:
-

 Summary: fix the queue size passed to 
FastPathBalancedQueueRpcExecutor
 Key: HBASE-17661
 URL: https://issues.apache.org/jira/browse/HBASE-17661
 Project: HBase
  Issue Type: Bug
Reporter: ChiaPing Tsai
Priority: Minor
 Fix For: 2.0.0, 1.4.0


{code:title=SimpleRpcScheduler.java|borderStyle=solid}
if (callqReadShare > 0) {
  // at least 1 read handler and 1 write handler
  callExecutor = new RWQueueRpcExecutor("deafult.RWQ", Math.max(2, 
handlerCount),
maxQueueLength, priority, conf, server);
} else {
  if (RpcExecutor.isFifoQueueType(callQueueType)) {
callExecutor = new FastPathBalancedQueueRpcExecutor("deafult.FPBQ", 
handlerCount,
maxPriorityQueueLength, priority, conf, server);
} else {
callExecutor = new BalancedQueueRpcExecutor("deafult.BQ", handlerCount, 
maxQueueLength,
priority, conf, server);
  }
}
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-16 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871302#comment-15871302
 ] 

ChiaPing Tsai commented on HBASE-17623:
---

bq. Did u get a chance to run this for a longer time (May be use PE with 100GB+ 
data) and see the GC impacts (Positive or negative)?
Copy that.

bq. You test with G1 or CMS?
CMS

Thanks for the feedback. [~anoop.hbase]


> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.branch-1.v1.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied
> to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
>     this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, userDataStream,
>         baosInMemory.getBuffer(), blockType);
>     blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) {
>     onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
>         compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
>     onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
>         compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>       onDiskBlockBytesWithHeader.length,
>       fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>       onDiskBlockBytesWithHeader.length + numBytes,
>       uncompressedBlockBytesWithHeader.length, onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
>     putHeader(uncompressedBlockBytesWithHeader, 0,
>         onDiskBlockBytesWithHeader.length + numBytes,
>         uncompressedBlockBytesWithHeader.length, onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
>     onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>       onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>       onDiskChecksum, 0, fileContext.getChecksumType(), fileContext.getBytesPerChecksum());
> }{code}
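
As a rough illustration of the two points above (a hypothetical class, not the attached patch): the writer keeps one growable byte array that is reused for every block it builds, and an allocation plus copy happens only on the cache-on-write path.

{code:title=Reusable buffer sketch (hypothetical names)|borderStyle=solid}
import java.util.Arrays;

/**
 * Minimal illustration of the idea: one growable backing array is reused across
 * blocks; a defensive copy is made only when the block must be handed to the
 * block cache (cache-on-write).
 */
final class ReusableBlockBuffer {
  private byte[] buf = new byte[64 * 1024];
  private int length;

  /** Grow the backing array only when needed, then reuse it for the next block. */
  byte[] reset(int requiredCapacity) {
    if (buf.length < requiredCapacity) {
      buf = new byte[Math.max(requiredCapacity, buf.length * 2)];
    }
    length = 0;
    return buf;
  }

  void setLength(int length) { this.length = length; }

  byte[] array() { return buf; }

  int length() { return length; }

  /** Only the cache-on-write path pays for an allocation and copy. */
  byte[] copyForCache() {
    return Arrays.copyOf(buf, length);
  }
}
{code}

Under a scheme like this, {{finishBlock()}} would write headers and checksums into the shared array in place, and only a block that is actually cached would call {{copyForCache()}}.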



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-16 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Attachment: HBASE-17623.branch-1.v1.patch

# Fix the findbugs warning.
# TestSimpleRpcScheduler passes locally.

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.branch-1.v1.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied
> to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-16 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Status: Patch Available  (was: Open)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.branch-1.v1.patch, HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied
> to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-16 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Status: Open  (was: Patch Available)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, memory 
> allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied
> to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-16 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Status: Patch Available  (was: Open)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, memory 
> allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied
> to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-16 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Status: Open  (was: Patch Available)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, memory 
> allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied
> to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-16 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Fix Version/s: 1.4.0

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, memory 
> allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied
> to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-16 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Attachment: HBASE-17623.branch-1.v0.patch

I borrowed some code from HBASE-11862 and HBASE-15077 for the branch-1 patch.

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.branch-1.v0.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, memory 
> allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied
> to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-16 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Attachment: before(snappy_hfilesize=5.04GB).png
after(snappy_hfilesize=5.04GB).png

The test ran for 30 minutes.

|| ||before||after||
|memory allocation|241.18 GB|150.89 GB|
|pause time|107.59 s|60.732 s|

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.v0.patch, 
> HBASE-17623.v1.patch, HBASE-17623.v1.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied
> to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-15 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868193#comment-15868193
 ] 

ChiaPing Tsai commented on HBASE-17623:
---

[~yuzhih...@gmail.com]
I will address your comment within the day. Thanks for your feedback.

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: after(snappy_hfilesize=755MB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.v0.patch, 
> HBASE-17623.v1.patch, HBASE-17623.v1.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied
> to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-15 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Attachment: after(snappy_hfilesize=755MB).png
before(snappy_hfilesize=755MB).png

The attachments show the GC activity. They illustrate that the lower number of GCs is due to memory reuse.
The amount of memory allocated is shown below.
||before||after||
|35GB|22GB|

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: after(snappy_hfilesize=755MB).png, 
> before(snappy_hfilesize=755MB).png, HBASE-17623.v0.patch, 
> HBASE-17623.v1.patch, HBASE-17623.v1.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied
> to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-12 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Status: Patch Available  (was: Open)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied
> to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-12 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Attachment: HBASE-17623.v1.patch

TestRecoveredEdits passes locally.
Retrying.

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should
> maintain a byte array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied
> to a new byte array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-12 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Status: Open  (was: Patch Available)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17623.v0.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v1.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-12 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Attachment: HBASE-17623.v1.patch

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17623.v0.patch, HBASE-17623.v1.patch, memory 
> allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-11 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Status: Patch Available  (was: Open)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17623.v0.patch, HBASE-17623.v1.patch, memory 
> allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-11 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15862693#comment-15862693
 ] 

ChiaPing Tsai commented on HBASE-17623:
---

[~yuzhih...@gmail.com]
h4. environment:
- 20 rows
- 5 families
- No compaction
- No split
- case 1: no cache, no compression, hfiles size = 2.05 GB
- case 2: cache-on-write, no compression, hfiles size = 2.05 GB
- case 3: no cache, snappy,  hfiles size = 129 MB

h4. benchmark
||case||before||after||
|case 1|7.53 GB|5.60 GB|
|case 2|7.84 GB|7.37 GB|
|case 3|5.78 GB|3.69 GB|

The attached file has more details.

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17623.v0.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-11 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Attachment: memory allocation measurement.xlsx

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17623.v0.patch, memory allocation measurement.xlsx
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-11 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Status: Open  (was: Patch Available)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17623.v0.patch
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-10 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15862206#comment-15862206
 ] 

ChiaPing Tsai commented on HBASE-17623:
---

[~yuzhih...@gmail.com]

{quote}
Have you measured the memory savings through the change ?
Please fix javadoc warning.
{quote}
I was delayed by something. I'll be back. Thanks for your review.

bq. Why is the above check removed ?
h3. before patch
If compression or encryption is configured, 
HFileBlockDefaultEncodingContext#compressAfterEncoding will return a new bytes 
array. Otherwise, the returned bytes array is the same reference as the input 
(uncompressedBlockBytesWithHeader). So we only need to put the header into both 
onDiskBlockBytesWithHeader and uncompressedBlockBytesWithHeader when they are 
backed by different bytes arrays; otherwise, putting it into one of them is 
enough.
h3. after patch
The check can be removed because this patch makes 
uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader always use 
different bytes arrays. As a result, we always need to put the header into both 
of them.
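
To make the before/after difference concrete, here is a tiny self-contained 
model (hypothetical names, not the HFileBlock code): the header has to be 
written into every distinct backing array, so the number of putHeader calls 
depends on whether compression produced a copy.
{code:title=HeaderSketch.java (illustrative sketch)|borderStyle=solid}
public class HeaderSketch {
  // Stand-in for HFileBlock's putHeader: just stamps two length fields.
  static void putHeader(byte[] dest, int onDiskLen, int uncompressedLen) {
    dest[0] = (byte) onDiskLen;
    dest[1] = (byte) uncompressedLen;
  }

  public static void main(String[] args) {
    byte[] uncompressed = new byte[64];

    // Before the patch: with no compression/encryption the on-disk array could
    // be the very same reference, so a single putHeader covered both views.
    byte[] onDisk = uncompressed;
    putHeader(onDisk, onDisk.length, uncompressed.length);
    System.out.println("second putHeader needed? " + (onDisk != uncompressed));

    // After the patch: the two buffers are always distinct arrays, so the
    // header must be written into both of them, unconditionally.
    onDisk = new byte[64];
    putHeader(onDisk, onDisk.length, uncompressed.length);
    putHeader(uncompressed, onDisk.length, uncompressed.length);
  }
}
{code}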

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17623.v0.patch
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-09 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Attachment: HBASE-17623.v0.patch

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17623.v0.patch
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-09 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Assignee: ChiaPing Tsai
  Status: Patch Available  (was: Open)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17623.v0.patch
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-02-09 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17623:
--
Fix Version/s: 2.0.0

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
>
> There are two improvements.
> # The uncompressedBlockBytesWithHeader and onDiskBlockBytesWithHeader should 
> maintain a bytes array which can be reused when building the hfile.
> # The uncompressedBlockBytesWithHeader/onDiskBlockBytesWithHeader is copied 
> to a new bytes array only when we need to cache the block.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17613) avoid copy of family when initializing the FSWALEntry

2017-02-08 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17613:
--
Status: Open  (was: Patch Available)

retry

> avoid copy of family when initializing the FSWALEntry
> -
>
> Key: HBASE-17613
> URL: https://issues.apache.org/jira/browse/HBASE-17613
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17613.v0.patch, HBASE-17613.v1.patch, 
> HBASE-17613.v2.patch, HBASE-17613.v2.patch
>
>
> We should compare the families before cloning them.
> {noformat}
> Set familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
> for (Cell cell : cells) {
>   if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
>   // TODO: Avoid this clone?
> familySet.add(CellUtil.cloneFamily(cell));
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17613) avoid copy of family when initializing the FSWALEntry

2017-02-08 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17613:
--
Status: Patch Available  (was: Open)

> avoid copy of family when initializing the FSWALEntry
> -
>
> Key: HBASE-17613
> URL: https://issues.apache.org/jira/browse/HBASE-17613
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17613.v0.patch, HBASE-17613.v1.patch, 
> HBASE-17613.v2.patch, HBASE-17613.v2.patch
>
>
> We should compare the families before cloning them.
> {noformat}
> Set familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
> for (Cell cell : cells) {
>   if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
>   // TODO: Avoid this clone?
> familySet.add(CellUtil.cloneFamily(cell));
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17613) avoid copy of family when initializing the FSWALEntry

2017-02-08 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17613:
--
Attachment: HBASE-17613.v2.patch

> avoid copy of family when initializing the FSWALEntry
> -
>
> Key: HBASE-17613
> URL: https://issues.apache.org/jira/browse/HBASE-17613
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17613.v0.patch, HBASE-17613.v1.patch, 
> HBASE-17613.v2.patch, HBASE-17613.v2.patch
>
>
> We should compare the families before cloning them.
> {noformat}
> Set familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
> for (Cell cell : cells) {
>   if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
>   // TODO: Avoid this clone?
> familySet.add(CellUtil.cloneFamily(cell));
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17613) avoid copy of family when initializing the FSWALEntry

2017-02-08 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17613:
--
Attachment: HBASE-17613.v2.patch

address [~yuzhih...@gmail.com]'s comment.

> avoid copy of family when initializing the FSWALEntry
> -
>
> Key: HBASE-17613
> URL: https://issues.apache.org/jira/browse/HBASE-17613
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17613.v0.patch, HBASE-17613.v1.patch, 
> HBASE-17613.v2.patch
>
>
> We should compare the families before cloning them.
> {noformat}
> Set familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
> for (Cell cell : cells) {
>   if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
>   // TODO: Avoid this clone?
> familySet.add(CellUtil.cloneFamily(cell));
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17613) avoid copy of family when initializing the FSWALEntry

2017-02-08 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17613:
--
Status: Patch Available  (was: Open)

> avoid copy of family when initializing the FSWALEntry
> -
>
> Key: HBASE-17613
> URL: https://issues.apache.org/jira/browse/HBASE-17613
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17613.v0.patch, HBASE-17613.v1.patch, 
> HBASE-17613.v2.patch
>
>
> We should compare the families before cloning them.
> {noformat}
> Set familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
> for (Cell cell : cells) {
>   if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
>   // TODO: Avoid this clone?
> familySet.add(CellUtil.cloneFamily(cell));
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17613) avoid copy of family when initializing the FSWALEntry

2017-02-08 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17613:
--
Status: Open  (was: Patch Available)

> avoid copy of family when initializing the FSWALEntry
> -
>
> Key: HBASE-17613
> URL: https://issues.apache.org/jira/browse/HBASE-17613
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17613.v0.patch, HBASE-17613.v1.patch, 
> HBASE-17613.v2.patch
>
>
> We should compare the families before cloning them.
> {noformat}
> Set familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
> for (Cell cell : cells) {
>   if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
>   // TODO: Avoid this clone?
> familySet.add(CellUtil.cloneFamily(cell));
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17613) avoid copy of family when initializing the FSWALEntry

2017-02-08 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17613:
--
Status: Patch Available  (was: Open)

> avoid copy of family when initializing the FSWALEntry
> -
>
> Key: HBASE-17613
> URL: https://issues.apache.org/jira/browse/HBASE-17613
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17613.v0.patch, HBASE-17613.v1.patch
>
>
> We should compare the families before cloning them.
> {noformat}
> Set familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
> for (Cell cell : cells) {
>   if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
>   // TODO: Avoid this clone?
> familySet.add(CellUtil.cloneFamily(cell));
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17613) avoid copy of family when initializing the FSWALEntry

2017-02-08 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17613:
--
Attachment: HBASE-17613.v1.patch

v1 adds the category to TestFSWALEntry
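
For context, adding a category to a test class usually looks like the sketch 
below (assuming the standard HBase/JUnit category annotations; the test method 
shown is only a placeholder):
{code:title=TestFSWALEntry.java (illustrative sketch)|borderStyle=solid}
import org.apache.hadoop.hbase.testclassification.SmallTests;
import org.junit.Test;
import org.junit.experimental.categories.Category;

// HBase's test runner selects suites by category, so every test class needs one.
@Category(SmallTests.class)
public class TestFSWALEntry {
  @Test
  public void testCollectFamilies() {
    // placeholder body; the real test exercises FSWALEntry's family handling
  }
}
{code}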

> avoid copy of family when initializing the FSWALEntry
> -
>
> Key: HBASE-17613
> URL: https://issues.apache.org/jira/browse/HBASE-17613
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17613.v0.patch, HBASE-17613.v1.patch
>
>
> We should compare the families before cloning them.
> {noformat}
> Set familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
> for (Cell cell : cells) {
>   if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
>   // TODO: Avoid this clone?
> familySet.add(CellUtil.cloneFamily(cell));
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17613) avoid copy of family when initializing the FSWALEntry

2017-02-08 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17613:
--
Status: Open  (was: Patch Available)

> avoid copy of family when initializing the FSWALEntry
> -
>
> Key: HBASE-17613
> URL: https://issues.apache.org/jira/browse/HBASE-17613
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17613.v0.patch
>
>
> We should compare the families before cloning them.
> {noformat}
> Set familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
> for (Cell cell : cells) {
>   if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
>   // TODO: Avoid this clone?
> familySet.add(CellUtil.cloneFamily(cell));
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17613) avoid copy of family when initializing the FSWALEntry

2017-02-08 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17613:
--
Attachment: HBASE-17613.v0.patch

> avoid copy of family when initializing the FSWALEntry
> -
>
> Key: HBASE-17613
> URL: https://issues.apache.org/jira/browse/HBASE-17613
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17613.v0.patch
>
>
> We should compare the families before cloning them.
> {noformat}
> Set familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
> for (Cell cell : cells) {
>   if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
>   // TODO: Avoid this clone?
> familySet.add(CellUtil.cloneFamily(cell));
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17613) avoid copy of family when initializing the FSWALEntry

2017-02-08 Thread ChiaPing Tsai (JIRA)
ChiaPing Tsai created HBASE-17613:
-

 Summary: avoid copy of family when initializing the FSWALEntry
 Key: HBASE-17613
 URL: https://issues.apache.org/jira/browse/HBASE-17613
 Project: HBase
  Issue Type: Improvement
Reporter: ChiaPing Tsai
Priority: Minor
 Fix For: 2.0.0


We should compare the families before cloning them.
{noformat}
Set familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
for (Cell cell : cells) {
  if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
  // TODO: Avoid this clone?
familySet.add(CellUtil.cloneFamily(cell));
  }
}
{noformat}
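
One way to read that suggestion, sketched below under the assumption that cells 
of the same family usually arrive consecutively (an illustration only, not the 
committed patch): remember the previous cell and clone the family only when it 
changes.
{code}
Set<byte[]> familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR);
Cell previous = null;
for (Cell cell : cells) {
  if (CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) {
    continue; // skip meta edits, as before
  }
  // Clone only when the family differs from the previous cell's family, so
  // consecutive cells of the same family no longer trigger a copy.
  if (previous == null || !CellUtil.matchingFamily(cell, previous)) {
    familySet.add(CellUtil.cloneFamily(cell));
  }
  previous = cell;
}
{code}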



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16731) Inconsistent results from the Get/Scan if we use the empty FilterList

2017-02-07 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15857314#comment-15857314
 ] 

ChiaPing Tsai commented on HBASE-16731:
---

[~pankaj2461]

Would you please create a separate JIRA to enhance that? Thanks.

> Inconsistent results from the Get/Scan if we use the empty FilterList
> -
>
> Key: HBASE-16731
> URL: https://issues.apache.org/jira/browse/HBASE-16731
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16731.v0.patch, HBASE-16731.v1.patch, 
> HBASE-16731.v2.patch, HBASE-16731.v3.patch
>
>
> RSRpcServices#get() converts the Get to a Scan without calling 
> scan#setLoadColumnFamiliesOnDemand. As a result, the results retrieved from a 
> Get and from a Scan differ when we use the empty filter: the Scan doesn't 
> return any data but the Get does.
> see [HBASE-16729 |https://issues.apache.org/jira/browse/HBASE-16729]
> Any comments? Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17519) Rollback the removed cells

2017-01-25 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17519:
--
Status: Patch Available  (was: Open)

> Rollback the removed cells
> --
>
> Key: HBASE-17519
> URL: https://issues.apache.org/jira/browse/HBASE-17519
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 1.4.0
>
> Attachments: HBASE-17519.branch-1.v0.patch, 
> HBASE-17519.branch-1.v1.patch, HBASE-17519.branch-1.v1.patch, 
> HBASE-17519.branch-1.v2.patch, HBASE-17519.branch-1.v2.patch, 
> HBASE-17519.branch-1.v2.patch
>
>
> The Store#upsert removes the old cells but we don’t rollback the removed 
> cells when failing.
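
The intent, sketched with placeholder names (the methods below are not the 
actual HBase Store API; they only illustrate the rollback pattern): remember 
what the upsert displaced so it can be re-added if a later step fails.
{code}
// Illustrative sketch only -- "upsertAndReturnDisplaced" and "persist" are
// placeholders, not real HBase methods.
List<Cell> displaced = new ArrayList<>();
try {
  displaced.addAll(store.upsertAndReturnDisplaced(newCells)); // cells removed by upsert
  store.persist(newCells);                                    // the step that may fail
} catch (IOException e) {
  for (Cell old : displaced) {
    store.add(old);   // roll the displaced cells back in
  }
  throw e;
}
{code}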



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17519) Rollback the removed cells

2017-01-25 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17519:
--
Attachment: HBASE-17519.branch-1.v2.patch

re-run the QA. Third time's a charm.

> Rollback the removed cells
> --
>
> Key: HBASE-17519
> URL: https://issues.apache.org/jira/browse/HBASE-17519
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 1.4.0
>
> Attachments: HBASE-17519.branch-1.v0.patch, 
> HBASE-17519.branch-1.v1.patch, HBASE-17519.branch-1.v1.patch, 
> HBASE-17519.branch-1.v2.patch, HBASE-17519.branch-1.v2.patch, 
> HBASE-17519.branch-1.v2.patch
>
>
> The Store#upsert removes the old cells but we don’t rollback the removed 
> cells when failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17519) Rollback the removed cells

2017-01-25 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17519:
--
Status: Open  (was: Patch Available)

> Rollback the removed cells
> --
>
> Key: HBASE-17519
> URL: https://issues.apache.org/jira/browse/HBASE-17519
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 1.4.0
>
> Attachments: HBASE-17519.branch-1.v0.patch, 
> HBASE-17519.branch-1.v1.patch, HBASE-17519.branch-1.v1.patch, 
> HBASE-17519.branch-1.v2.patch, HBASE-17519.branch-1.v2.patch
>
>
> The Store#upsert removes the old cells but we don’t rollback the removed 
> cells when failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17519) Rollback the removed cells

2017-01-25 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837553#comment-15837553
 ] 

ChiaPing Tsai commented on HBASE-17519:
---

TestHCM always passes on my mac, ubuntu, and centos.
I will check it out.

> Rollback the removed cells
> --
>
> Key: HBASE-17519
> URL: https://issues.apache.org/jira/browse/HBASE-17519
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 1.4.0
>
> Attachments: HBASE-17519.branch-1.v0.patch, 
> HBASE-17519.branch-1.v1.patch, HBASE-17519.branch-1.v1.patch, 
> HBASE-17519.branch-1.v2.patch, HBASE-17519.branch-1.v2.patch
>
>
> The Store#upsert removes the old cells but we don’t rollback the removed 
> cells when failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17519) Rollback the removed cells

2017-01-24 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17519:
--
Status: Patch Available  (was: Open)

> Rollback the removed cells
> --
>
> Key: HBASE-17519
> URL: https://issues.apache.org/jira/browse/HBASE-17519
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 1.4.0
>
> Attachments: HBASE-17519.branch-1.v0.patch, 
> HBASE-17519.branch-1.v1.patch, HBASE-17519.branch-1.v1.patch, 
> HBASE-17519.branch-1.v2.patch, HBASE-17519.branch-1.v2.patch
>
>
> The Store#upsert removes the old cells but we don’t rollback the removed 
> cells when failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17519) Rollback the removed cells

2017-01-24 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17519:
--
Status: Open  (was: Patch Available)

retry

> Rollback the removed cells
> --
>
> Key: HBASE-17519
> URL: https://issues.apache.org/jira/browse/HBASE-17519
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 1.4.0
>
> Attachments: HBASE-17519.branch-1.v0.patch, 
> HBASE-17519.branch-1.v1.patch, HBASE-17519.branch-1.v1.patch, 
> HBASE-17519.branch-1.v2.patch, HBASE-17519.branch-1.v2.patch
>
>
> The Store#upsert removes the old cells but we don’t rollback the removed 
> cells when failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17519) Rollback the removed cells

2017-01-24 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17519:
--
Attachment: HBASE-17519.branch-1.v2.patch

> Rollback the removed cells
> --
>
> Key: HBASE-17519
> URL: https://issues.apache.org/jira/browse/HBASE-17519
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 1.4.0
>
> Attachments: HBASE-17519.branch-1.v0.patch, 
> HBASE-17519.branch-1.v1.patch, HBASE-17519.branch-1.v1.patch, 
> HBASE-17519.branch-1.v2.patch, HBASE-17519.branch-1.v2.patch
>
>
> The Store#upsert removes the old cells but we don’t rollback the removed 
> cells when failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17519) Rollback the removed cells

2017-01-24 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837138#comment-15837138
 ] 

ChiaPing Tsai commented on HBASE-17519:
---

All failed tests pass locally.
[~yuzhih...@gmail.com] Would you please take a look at v2? Thanks.

> Rollback the removed cells
> --
>
> Key: HBASE-17519
> URL: https://issues.apache.org/jira/browse/HBASE-17519
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 1.4.0
>
> Attachments: HBASE-17519.branch-1.v0.patch, 
> HBASE-17519.branch-1.v1.patch, HBASE-17519.branch-1.v1.patch, 
> HBASE-17519.branch-1.v2.patch
>
>
> The Store#upsert removes the old cells but we don’t rollback the removed 
> cells when failing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17510) DefaultMemStore gets the wrong heap size after rollback

2017-01-23 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17510:
--
Attachment: HBASE-17510.branch-1.v0.patch

> DefaultMemStore gets the wrong heap size after rollback
> ---
>
> Key: HBASE-17510
> URL: https://issues.apache.org/jira/browse/HBASE-17510
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
> Fix For: 1.4.0
>
> Attachments: HBASE-17510.branch-1.v0.patch
>
>
> We should calculate the size of “found” rather than “cell” because the offset 
> value may cause a difference in heap size between “cell” and “found”.
> {code:title=DefaultMemStore.java|borderStyle=solid}
>   @Override
>   public void rollback(Cell cell) {
> // If the key is in the memstore, delete it. Update this.size.
> found = this.cellSet.get(cell);
> if (found != null && found.getSequenceId() == cell.getSequenceId()) {
>   removeFromCellSet(cell);
>   long s = heapSizeChange(cell, true);
>   this.size.addAndGet(-s);
> }
>   }
> {code}
> {code:title=KeyValue.java|borderStyle=solid}
>   @Override
>   public long heapSize() {
> return ClassSize.align(sum) +
> (offset == 0
>   ? ClassSize.sizeOf(bytes, length) // count both length and object 
> overhead
>   : length);// only count the number of bytes
>   }
> {code}
> The wrong heap size of the store will block HRegion#doClose because 
> HRegion#memstoreSize will always stay above zero even if we flush the store.
> {code:title=HRegion.java|borderStyle=solid}
> while (this.memstoreSize.get() > 0) {
>   try {
> if (flushCount++ > 0) {
>   int actualFlushes = flushCount - 1;
>   if (actualFlushes > 5) {
> // If we tried 5 times and are unable to clear memory, abort
> // so we do not lose data
> throw new DroppedSnapshotException("Failed clearing memory 
> after " +
>   actualFlushes + " attempts on region: " +
> Bytes.toStringBinary(getRegionInfo().getRegionName()));
>   }
>   LOG.info("Running extra flush, " + actualFlushes +
> " (carrying snapshot?) " + this);
> }
> internalFlushcache(status);
>   } catch (IOException ioe) {
> status.setStatus("Failed flush " + this + ", putting online 
> again");
> synchronized (writestate) {
>   writestate.writesEnabled = true;
> }
> // Have to throw to upper layers.  I can't abort server from here.
> throw ioe;
>   }
> }
> {code}
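
The fix the description points at, sketched below (not necessarily the 
committed patch): subtract the heap size of the cell that actually lives in the 
cell set, so it matches what was added earlier.
{code:title=DefaultMemStore.java (sketched fix)|borderStyle=solid}
@Override
public void rollback(Cell cell) {
  // If the key is in the memstore, delete it. Update this.size.
  Cell found = this.cellSet.get(cell);
  if (found != null && found.getSequenceId() == cell.getSequenceId()) {
    removeFromCellSet(cell);
    // Use "found" -- the instance stored in the memstore -- so the subtracted
    // heap size matches what was originally added, even when the caller's cell
    // has a non-zero offset.
    long s = heapSizeChange(found, true);
    this.size.addAndGet(-s);
  }
}
{code}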



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17510) DefaultMemStore gets the wrong heap size after rollback

2017-01-23 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17510:
--
Assignee: ChiaPing Tsai
  Status: Patch Available  (was: Open)

> DefaultMemStore gets the wrong heap size after rollback
> ---
>
> Key: HBASE-17510
> URL: https://issues.apache.org/jira/browse/HBASE-17510
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 1.4.0
>
> Attachments: HBASE-17510.branch-1.v0.patch
>
>
> We should calculate the size of “found” rather than “cell” because the offset 
> value may make the heap sizes of “cell” and “found” differ.
> {code:title=DefaultMemStore.java|borderStyle=solid}
>   @Override
>   public void rollback(Cell cell) {
> // If the key is in the memstore, delete it. Update this.size.
> found = this.cellSet.get(cell);
> if (found != null && found.getSequenceId() == cell.getSequenceId()) {
>   removeFromCellSet(cell);
>   long s = heapSizeChange(cell, true);
>   this.size.addAndGet(-s);
> }
>   }
> {code}
> {code:title=KeyValue.java|borderStyle=solid}
>   @Override
>   public long heapSize() {
> return ClassSize.align(sum) +
> (offset == 0
>   ? ClassSize.sizeOf(bytes, length) // count both length and object 
> overhead
>   : length);// only count the number of bytes
>   }
> {code}
> The wrong store heap size will block HRegion#doClose because 
> HRegion#memstoreSize will stay greater than zero even after we flush the 
> store.
> {code:title=HRegion.java|borderStyle=solid}
> while (this.memstoreSize.get() > 0) {
>   try {
> if (flushCount++ > 0) {
>   int actualFlushes = flushCount - 1;
>   if (actualFlushes > 5) {
> // If we tried 5 times and are unable to clear memory, abort
> // so we do not lose data
> throw new DroppedSnapshotException("Failed clearing memory 
> after " +
>   actualFlushes + " attempts on region: " +
> Bytes.toStringBinary(getRegionInfo().getRegionName()));
>   }
>   LOG.info("Running extra flush, " + actualFlushes +
> " (carrying snapshot?) " + this);
> }
> internalFlushcache(status);
>   } catch (IOException ioe) {
> status.setStatus("Failed flush " + this + ", putting online 
> again");
> synchronized (writestate) {
>   writestate.writesEnabled = true;
> }
> // Have to throw to upper layers.  I can't abort server from here.
> throw ioe;
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17510) DefaultMemStore gets the wrong heap size after rollback

2017-01-23 Thread ChiaPing Tsai (JIRA)
ChiaPing Tsai created HBASE-17510:
-

 Summary: DefaultMemStore gets the wrong heap size after rollback
 Key: HBASE-17510
 URL: https://issues.apache.org/jira/browse/HBASE-17510
 Project: HBase
  Issue Type: Bug
Reporter: ChiaPing Tsai
 Fix For: 1.4.0


We should calculate the size of “found” rather than “cell” because the offset 
value may make the heap sizes of “cell” and “found” differ.
{code:title=DefaultMemStore.java|borderStyle=solid}
  @Override
  public void rollback(Cell cell) {
// If the key is in the memstore, delete it. Update this.size.
found = this.cellSet.get(cell);
if (found != null && found.getSequenceId() == cell.getSequenceId()) {
  removeFromCellSet(cell);
  long s = heapSizeChange(cell, true);
  this.size.addAndGet(-s);
}
  }
{code}

{code:title=KeyValue.java|borderStyle=solid}
  @Override
  public long heapSize() {
return ClassSize.align(sum) +
(offset == 0
  ? ClassSize.sizeOf(bytes, length) // count both length and object overhead
  : length); // only count the number of bytes
  }
{code}
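So the rollback should charge the memstore for the cell instance that actually lives in the cell set. Roughly, the intended change looks like the sketch below, with "found" passed to heapSizeChange; cellSet, removeFromCellSet and heapSizeChange are the existing branch-1 DefaultMemStore members, and this is a sketch rather than the attached patch.
{code:title=DefaultMemStore.java (sketch of the fix)|borderStyle=solid}
  @Override
  public void rollback(Cell cell) {
    // If the key is in the memstore, delete it. Update this.size.
    Cell found = this.cellSet.get(cell);
    if (found != null && found.getSequenceId() == cell.getSequenceId()) {
      removeFromCellSet(cell);
      // Charge the size of the cell we actually stored ("found"), not the
      // caller's copy ("cell"): their offsets differ, so their heap sizes can too.
      long s = heapSizeChange(found, true);
      this.size.addAndGet(-s);
    }
  }
{code}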

The wrong store heap size will block HRegion#doClose because 
HRegion#memstoreSize will stay greater than zero even after we flush the store.
{code:title=HRegion.java|borderStyle=solid}
while (this.memstoreSize.get() > 0) {
  try {
if (flushCount++ > 0) {
  int actualFlushes = flushCount - 1;
  if (actualFlushes > 5) {
// If we tried 5 times and are unable to clear memory, abort
// so we do not lose data
throw new DroppedSnapshotException("Failed clearing memory after " +
  actualFlushes + " attempts on region: " +
Bytes.toStringBinary(getRegionInfo().getRegionName()));
  }
  LOG.info("Running extra flush, " + actualFlushes +
" (carrying snapshot?) " + this);
}
internalFlushcache(status);
  } catch (IOException ioe) {
status.setStatus("Failed flush " + this + ", putting online again");
synchronized (writestate) {
  writestate.writesEnabled = true;
}
// Have to throw to upper layers.  I can't abort server from here.
throw ioe;
  }
}
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17504) The passed durability of Increment is ignored when syncing WAL

2017-01-21 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17504:
--
Attachment: HBASE-17504.branch-1.v0.patch

> The passed durability of Increment is ignored when syncing WAL
> --
>
> Key: HBASE-17504
> URL: https://issues.apache.org/jira/browse/HBASE-17504
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Priority: Minor
> Fix For: 1.4.0
>
> Attachments: HBASE-17504.branch-1.v0.patch
>
>
> {code:title=HRegion.java|borderStyle=solid}
> private Result doIncrement(Increment increment, long nonceGroup, long nonce) 
> throws IOException {
> Durability effectiveDurability =
> getEffectiveDurability(increment.getDurability());
> ...
> if(txid != 0) {
>   syncOrDefer(txid, durability);
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17504) The passed durability of Increment is ignored when syncing WAL

2017-01-21 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17504:
--
Assignee: ChiaPing Tsai
  Status: Patch Available  (was: Open)

> The passed durability of Increment is ignored when syncing WAL
> --
>
> Key: HBASE-17504
> URL: https://issues.apache.org/jira/browse/HBASE-17504
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 1.4.0
>
> Attachments: HBASE-17504.branch-1.v0.patch
>
>
> {code:title=HRegion.java|borderStyle=solid}
> private Result doIncrement(Increment increment, long nonceGroup, long nonce) 
> throws IOException {
> Durability effectiveDurability =
> getEffectiveDurability(increment.getDurability());
> ...
> if(txid != 0) {
>   syncOrDefer(txid, durability);
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17504) The passed durability of Increment is ignored when syncing WAL

2017-01-21 Thread ChiaPing Tsai (JIRA)
ChiaPing Tsai created HBASE-17504:
-

 Summary: The passed durability of Increment is ignored when 
syncing WAL
 Key: HBASE-17504
 URL: https://issues.apache.org/jira/browse/HBASE-17504
 Project: HBase
  Issue Type: Bug
Reporter: ChiaPing Tsai
Priority: Minor
 Fix For: 1.4.0


{code:title=HRegion.java|borderStyle=solid}
private Result doIncrement(Increment increment, long nonceGroup, long nonce) 
throws IOException {
Durability effectiveDurability =
getEffectiveDurability(increment.getDurability());
...
if(txid != 0) {
  syncOrDefer(txid, durability);
}
}
{code}
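The straightforward fix is to sync with the effective durability, which already folds in the table default, instead of the unrelated local variable. A sketch of the intended call site, assuming the surrounding branch-1 doIncrement body shown above:
{code:title=HRegion.java (sketch of the fix)|borderStyle=solid}
private Result doIncrement(Increment increment, long nonceGroup, long nonce)
    throws IOException {
  Durability effectiveDurability =
      getEffectiveDurability(increment.getDurability());
  // ...
  if (txid != 0) {
    // use the durability the client asked for (or the table default),
    // not the stale local "durability" variable
    syncOrDefer(txid, effectiveDurability);
  }
  // ...
}
{code}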



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-20 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15832188#comment-15832188
 ] 

ChiaPing Tsai commented on HBASE-17488:
---

bq. Consider using the nice ./dev-tools/submit-patch.py in future. It will do 
proper formatting of the patch for you, makes the commiter's life a little 
easier getting stuff in, and it ensures you get credit for your work. Go easy.

Copy that. Thanks for the reminder, [~stack].

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17488.branch-1.v0.patch, 
> HBASE-17488.branch-1.v0.patch, HBASE-17488.branch-1.v1.patch, 
> HBASE-17488.v0.patch
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-20 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15831547#comment-15831547
 ] 

ChiaPing Tsai commented on HBASE-17488:
---

[~stack] 
The patch is ready for review.
Would you please take a look? Thanks.

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17488.branch-1.v0.patch, 
> HBASE-17488.branch-1.v0.patch, HBASE-17488.branch-1.v1.patch, 
> HBASE-17488.v0.patch
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17494) Guard against cloning family of all cells if no data need be replicated

2017-01-19 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17494:
--
Fix Version/s: 1.4.0

> Guard against cloning family of all cells if no data need be replicated
> ---
>
> Key: HBASE-17494
> URL: https://issues.apache.org/jira/browse/HBASE-17494
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Priority: Trivial
> Fix For: 1.4.0
>
> Attachments: HBASE-17494.branch-1.v0.patch
>
>
> The replication is enabled by default, so we try to clone the family of all 
> cells even if there is no replication at all.
> {noformat}
>   family = CellUtil.cloneFamily(cell);
>   // Unexpected, has a tendency to happen in unit tests
>   assert htd.getFamily(family) != null;
>   if (!scopes.containsKey(family)) {
>   int scope = htd.getFamily(family).getScope();
>   if (scope != REPLICATION_SCOPE_LOCAL) {
>   scopes.put(family, scope);
>   }
>   }
> {noformat}
> HBASE-15205 resolved this issue, but it was committed to master only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17494) Guard against cloning family of all cells if no data need be replicated

2017-01-19 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17494:
--
Attachment: HBASE-17494.branch-1.v0.patch

This issue can be resolved by some trivial changes 
(HBASE-17494.branch-1.v0.patch).

> Guard against cloning family of all cells if no data need be replicated
> ---
>
> Key: HBASE-17494
> URL: https://issues.apache.org/jira/browse/HBASE-17494
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Priority: Trivial
> Attachments: HBASE-17494.branch-1.v0.patch
>
>
> The replication is enabled by default, so we try to clone the family of all 
> cells even if there is no replication at all.
> {noformat}
>   family = CellUtil.cloneFamily(cell);
>   // Unexpected, has a tendency to happen in unit tests
>   assert htd.getFamily(family) != null;
>   if (!scopes.containsKey(family)) {
>   int scope = htd.getFamily(family).getScope();
>   if (scope != REPLICATION_SCOPE_LOCAL) {
>   scopes.put(family, scope);
>   }
>   }
> {noformat}
> HBASE-15205 resolved this issue, but it was committed to master only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17494) Guard against cloning family of all cells if no data need be replicated

2017-01-19 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17494:
--
Status: Patch Available  (was: Open)

> Guard against cloning family of all cells if no data need be replicated
> ---
>
> Key: HBASE-17494
> URL: https://issues.apache.org/jira/browse/HBASE-17494
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Priority: Trivial
> Attachments: HBASE-17494.branch-1.v0.patch
>
>
> The replication is enabled by default, so we try to clone the family of all 
> cells even if there is no replication at all.
> {noformat}
>   family = CellUtil.cloneFamily(cell);
>   // Unexpected, has a tendency to happen in unit tests
>   assert htd.getFamily(family) != null;
>   if (!scopes.containsKey(family)) {
>   int scope = htd.getFamily(family).getScope();
>   if (scope != REPLICATION_SCOPE_LOCAL) {
>   scopes.put(family, scope);
>   }
>   }
> {noformat}
> HBASE-15205 resolved this issue, but it was committed to master only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17494) Guard against cloning family of all cells if no data need be replicated

2017-01-19 Thread ChiaPing Tsai (JIRA)
ChiaPing Tsai created HBASE-17494:
-

 Summary: Guard against cloning family of all cells if no data need 
be replicated
 Key: HBASE-17494
 URL: https://issues.apache.org/jira/browse/HBASE-17494
 Project: HBase
  Issue Type: Improvement
Reporter: ChiaPing Tsai
Priority: Trivial


The replication is enabled by default, so we try to clone the family of all 
cells even if there is no replication at all.
{noformat}
  family = CellUtil.cloneFamily(cell);
  // Unexpected, has a tendency to happen in unit tests
  assert htd.getFamily(family) != null;

  if (!scopes.containsKey(family)) {
  int scope = htd.getFamily(family).getScope();
  if (scope != REPLICATION_SCOPE_LOCAL) {
  scopes.put(family, scope);
  }
  }
{noformat}

HBASE-15205 resolved this issue, but it was committed to master only.
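One way to get the same guard on branch-1, in the spirit of HBASE-15205, is to decide once per table whether any family replicates at all and only then clone the family bytes inside the per-cell loop. A sketch under that assumption follows; htd and scopes come from the surrounding WAL-building code, and this is not the attached patch.
{code:title=sketch|borderStyle=solid}
// Decided once per batch: does any family of this table replicate at all?
boolean replicationMayBeNeeded = false;
for (HColumnDescriptor fam : htd.getFamilies()) {
  if (fam.getScope() != HConstants.REPLICATION_SCOPE_LOCAL) {
    replicationMayBeNeeded = true;
    break;
  }
}

// Per-cell loop: only pay for the family clone when it can matter.
if (replicationMayBeNeeded) {
  byte[] family = CellUtil.cloneFamily(cell);
  // Unexpected, has a tendency to happen in unit tests
  assert htd.getFamily(family) != null;
  if (!scopes.containsKey(family)) {
    int scope = htd.getFamily(family).getScope();
    if (scope != HConstants.REPLICATION_SCOPE_LOCAL) {
      scopes.put(family, scope);
    }
  }
}
{code}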



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-19 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17488:
--
Attachment: HBASE-17488.v0.patch

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17488.branch-1.v0.patch, 
> HBASE-17488.branch-1.v0.patch, HBASE-17488.branch-1.v1.patch, 
> HBASE-17488.v0.patch
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-19 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17488:
--
Status: Patch Available  (was: Open)

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17488.branch-1.v0.patch, 
> HBASE-17488.branch-1.v0.patch, HBASE-17488.branch-1.v1.patch, 
> HBASE-17488.v0.patch
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-19 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17488:
--
Status: Open  (was: Patch Available)

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17488.branch-1.v0.patch, 
> HBASE-17488.branch-1.v0.patch, HBASE-17488.branch-1.v1.patch
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-19 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17488:
--
Attachment: HBASE-17488.branch-1.v1.patch

All failed tests pass locally.
v1 includes the following change:
if the corresponding mutation has SKIP_WAL durability, we shouldn't count the 
cells of the returned mutation.

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17488.branch-1.v0.patch, 
> HBASE-17488.branch-1.v0.patch, HBASE-17488.branch-1.v1.patch
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-19 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17488:
--
Status: Patch Available  (was: Open)

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17488.branch-1.v0.patch, 
> HBASE-17488.branch-1.v0.patch, HBASE-17488.branch-1.v1.patch
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-19 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17488:
--
Status: Open  (was: Patch Available)

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17488.branch-1.v0.patch, 
> HBASE-17488.branch-1.v0.patch
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-18 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17488:
--
Release Note: 
Prevent creating unused objects during the WALEdit's construction.
+If cp#preBatchMutate returns true, the WALEdit is useless, so we should 
create the WALEdit after step 2.
+The cells coming from the coprocessor should be counted because they are added 
into the WALEdit; the use case is Phoenix's local index.
+If the mutation carries the SKIP_WAL property, its cells aren't added into 
the WALEdit, so these cells shouldn't be counted.

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17488.branch-1.v0.patch, 
> HBASE-17488.branch-1.v0.patch
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-18 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829420#comment-15829420
 ] 

ChiaPing Tsai commented on HBASE-17488:
---

bq. You have a patch for master branch?
Yes, I will attach the patch for master after QA.

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17488.branch-1.v0.patch, 
> HBASE-17488.branch-1.v0.patch
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-18 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829410#comment-15829410
 ] 

ChiaPing Tsai commented on HBASE-17488:
---

Hi [~stack]
bq. What sort of benefit do you expect
Prevent creating unused objects during the WALEdit's construction (see the 
sketch below).

# If cp#preBatchMutate returns true, the WALEdit is useless, so we should 
create the WALEdit after step 2.
# The cells coming from the coprocessor should be counted because they are added 
into the WALEdit; the use case is Phoenix's local index.
# If the mutation carries the SKIP_WAL property, its cells aren't added into 
the WALEdit, so these cells shouldn't be counted.
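Roughly, the reshuffle looks like the fragment below: the WALEdit is only allocated once we know the coprocessor did not bypass the batch and at least one mutation will actually reach the WAL. Method and field names are assumptions modelled on HRegion's batch-mutate path, not the patch itself.
{code:title=sketch (inside the batch-mutate path)|borderStyle=solid}
WALEdit walEdit = null;  // no longer allocated up front (step 2)

// step 2: give the coprocessor the batch first; if it bypasses the default
// processing, no WALEdit object is ever created
if (coprocessorHost != null && coprocessorHost.preBatchMutate(miniBatchOp)) {
  return 0L;
}

// step 3: build the edit lazily, skipping SKIP_WAL mutations entirely
for (int i = 0; i < miniBatchOp.size(); i++) {
  Mutation m = miniBatchOp.getOperation(i);
  if (m.getDurability() == Durability.SKIP_WAL) {
    continue;  // these cells never reach the WAL, so they are not counted
  }
  if (walEdit == null) {
    walEdit = new WALEdit();  // created only when a mutation really needs it
  }
  // cells added here (including any the coprocessor injected) are the ones
  // that should be counted against the edit
  addFamilyMapToWALEdit(m.getFamilyCellMap(), walEdit);
}
{code}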

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17488.branch-1.v0.patch
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-18 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17488:
--
Assignee: ChiaPing Tsai
  Status: Patch Available  (was: Open)

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17488.branch-1.v0.patch
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-18 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17488:
--
Attachment: HBASE-17488.branch-1.v0.patch

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17488.branch-1.v0.patch
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-18 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17488:
--
Attachment: HBASE-17488.v0.patch

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-18 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17488:
--
Attachment: (was: HBASE-17488.v0.patch)

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-18 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17488:
--
Fix Version/s: (was: 1.3.0)
   1.4.0

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-18 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17488:
--
Fix Version/s: (was: 1.4.0)
   1.3.0

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.3.0
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-18 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17488:
--
Fix Version/s: 1.4.0
   2.0.0

> WALEdit should be lazily instantiated
> -
>
> Key: HBASE-17488
> URL: https://issues.apache.org/jira/browse/HBASE-17488
> Project: HBase
>  Issue Type: Improvement
>Reporter: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.3.0
>
>
> Some trivial improvement.
> # create the WALEdit on step 3 instead of step 2
> # count the cells from coprocessor
> # don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17488) WALEdit should be lazily instantiated

2017-01-18 Thread ChiaPing Tsai (JIRA)
ChiaPing Tsai created HBASE-17488:
-

 Summary: WALEdit should be lazily instantiated
 Key: HBASE-17488
 URL: https://issues.apache.org/jira/browse/HBASE-17488
 Project: HBase
  Issue Type: Improvement
Reporter: ChiaPing Tsai
Priority: Trivial


Some trivial improvement.
# create the WALEdit on step 3 instead of step 2
# count the cells from coprocessor
# don’t count the mutations which contain the Durability.SKIP_WAL property



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17426) Inconsistent environment variable names for enabling JMX

2017-01-14 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15822893#comment-15822893
 ] 

ChiaPing Tsai commented on HBASE-17426:
---

Can I take this issue? Which variable name is better? Should the documentation 
be updated in the patch?

> Inconsistent environment variable names for enabling JMX
> 
>
> Key: HBASE-17426
> URL: https://issues.apache.org/jira/browse/HBASE-17426
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> In conf/hbase-env.sh :
> {code}
> # export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false 
> -Dcom.sun.management.jmxremote.authenticate=false"
> # If you want to configure BucketCache, specify '-XX: MaxDirectMemorySize=' 
> with proper direct memory size
> # export HBASE_THRIFT_OPTS="$HBASE_JMX_BASE 
> -Dcom.sun.management.jmxremote.port=10103"
> {code}
> But in bin/hbase-config.sh , a different variable is used:
> {code}
> # Thrift JMX opts
> if [ -n "$HBASE_JMX_OPTS" ] && [ -z "$HBASE_THRIFT_JMX_OPTS" ]; then
>   HBASE_THRIFT_JMX_OPTS="$HBASE_JMX_OPTS 
> -Dcom.sun.management.jmxremote.port=10103"
> fi
> {code}
> The variable names should be aligned for better user experience.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15600) Add provision for adding mutations to memstore or able to write to same region in batchMutate coprocessor hooks

2017-01-09 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15811959#comment-15811959
 ] 

ChiaPing Tsai commented on HBASE-15600:
---

My cat messed up my keyboard... sorry for making noise.

> Add provision for adding mutations to memstore or able to write to same 
> region in batchMutate coprocessor hooks
> ---
>
> Key: HBASE-15600
> URL: https://issues.apache.org/jira/browse/HBASE-15600
> Project: HBase
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: phoenix
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15600.patch, HBASE-15600_v1.patch, 
> HBASE-15600_v2.patch, hbase-15600_v3.patch, hbase-15600_v4.patch, 
> hbase-15600_v5.patch, hbase-15600_v6.patch
>
>
> As part of PHOENIX-1734 we need to write the index updates to the same region 
> from coprocessors, but writing through the batchMutate API is not allowed 
> because of MVCC. 
> PHOENIX-2742 was raised to discuss alternative ways to write to the same 
> region directly, but no proper solution came out of it.
> Currently we have a provision to write WAL edits from coprocessors: we can set 
> WAL edits in MiniBatchOperationInProgress.
> {noformat}
>   /**
>* Sets the walEdit for the operation(Mutation) at the specified position.
>* @param index
>* @param walEdit
>*/
>   public void setWalEdit(int index, WALEdit walEdit) {
> this.walEditsFromCoprocessors[getAbsoluteIndex(index)] = walEdit;
>   }
> {noformat}
> Similarly, we could allow coprocessors to write mutations to the memstore as 
> well, or else provide a batch-mutation API that allows writes from the 
> batchMutate coprocessor hooks.
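By analogy with setWalEdit, the proposal boils down to letting a hook hand extra mutations for the same region back to the mini-batch, so the region applies them through the normal memstore/MVCC path. A hypothetical accessor along those lines; addOperationsFromCP and operationsFromCoprocessors are illustrative names, not a committed API.
{noformat}
  /**
   * Sketch: lets a coprocessor attach extra mutations (e.g. Phoenix local-index
   * updates) to the operation at the specified position; the region later
   * applies them under the same MVCC transaction as the rest of the batch.
   */
  public void addOperationsFromCP(int index, Mutation[] newOperations) {
    if (this.operationsFromCoprocessors == null) {
      // lazy allocation: most batches never receive extra mutations
      this.operationsFromCoprocessors = new Mutation[operations.length][];
    }
    this.operationsFromCoprocessors[getAbsoluteIndex(index)] = newOperations;
  }
{noformat}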



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15600) Add provision for adding mutations to memstore or able to write to same region in batchMutate coprocessor hooks

2017-01-09 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai reassigned HBASE-15600:
-

Assignee: ChiaPing Tsai  (was: Rajeshbabu Chintaguntla)

> Add provision for adding mutations to memstore or able to write to same 
> region in batchMutate coprocessor hooks
> ---
>
> Key: HBASE-15600
> URL: https://issues.apache.org/jira/browse/HBASE-15600
> Project: HBase
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: ChiaPing Tsai
>  Labels: phoenix
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15600.patch, HBASE-15600_v1.patch, 
> HBASE-15600_v2.patch, hbase-15600_v3.patch, hbase-15600_v4.patch, 
> hbase-15600_v5.patch, hbase-15600_v6.patch
>
>
> As part of PHOENIX-1734 we need to write the index updates to the same region 
> from coprocessors, but writing through the batchMutate API is not allowed 
> because of MVCC. 
> PHOENIX-2742 was raised to discuss alternative ways to write to the same 
> region directly, but no proper solution came out of it.
> Currently we have a provision to write WAL edits from coprocessors: we can set 
> WAL edits in MiniBatchOperationInProgress.
> {noformat}
>   /**
>* Sets the walEdit for the operation(Mutation) at the specified position.
>* @param index
>* @param walEdit
>*/
>   public void setWalEdit(int index, WALEdit walEdit) {
> this.walEditsFromCoprocessors[getAbsoluteIndex(index)] = walEdit;
>   }
> {noformat}
> Similarly, we could allow coprocessors to write mutations to the memstore as 
> well, or else provide a batch-mutation API that allows writes from the 
> batchMutate coprocessor hooks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15600) Add provision for adding mutations to memstore or able to write to same region in batchMutate coprocessor hooks

2017-01-09 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-15600:
--
Assignee: Rajeshbabu Chintaguntla  (was: ChiaPing Tsai)

> Add provision for adding mutations to memstore or able to write to same 
> region in batchMutate coprocessor hooks
> ---
>
> Key: HBASE-15600
> URL: https://issues.apache.org/jira/browse/HBASE-15600
> Project: HBase
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: phoenix
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15600.patch, HBASE-15600_v1.patch, 
> HBASE-15600_v2.patch, hbase-15600_v3.patch, hbase-15600_v4.patch, 
> hbase-15600_v5.patch, hbase-15600_v6.patch
>
>
> As part of PHOENIX-1734 we need to write the index updates to the same region 
> from coprocessors, but writing through the batchMutate API is not allowed 
> because of MVCC. 
> PHOENIX-2742 was raised to discuss alternative ways to write to the same 
> region directly, but no proper solution came out of it.
> Currently we have a provision to write WAL edits from coprocessors: we can set 
> WAL edits in MiniBatchOperationInProgress.
> {noformat}
>   /**
>* Sets the walEdit for the operation(Mutation) at the specified position.
>* @param index
>* @param walEdit
>*/
>   public void setWalEdit(int index, WALEdit walEdit) {
> this.walEditsFromCoprocessors[getAbsoluteIndex(index)] = walEdit;
>   }
> {noformat}
> Similarly, we could allow coprocessors to write mutations to the memstore as 
> well, or else provide a batch-mutation API that allows writes from the 
> batchMutate coprocessor hooks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-06 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807035#comment-15807035
 ] 

ChiaPing Tsai commented on HBASE-17408:
---

Can we have a separate JIRA to enhance that?

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch, 
> HBASE-17408.v2.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803286#comment-15803286
 ] 

ChiaPing Tsai commented on HBASE-17408:
---

bq. Should we do a follow up for the server side as well
Do you mean that the server processes the partial rows and then returns an 
exception so the client retries the remaining rows?

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch, 
> HBASE-17408.v2.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802009#comment-15802009
 ] 

ChiaPing Tsai commented on HBASE-17408:
---

programmers work at night :D

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch, 
> HBASE-17408.v2.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17408:
--
Status: Patch Available  (was: Open)

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch, 
> HBASE-17408.v2.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17408:
--
Attachment: HBASE-17408.v2.patch

Addressed [~yuzhih...@gmail.com]'s comments in v2.

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch, 
> HBASE-17408.v2.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17408:
--
Status: Open  (was: Patch Available)

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801903#comment-15801903
 ] 

ChiaPing Tsai commented on HBASE-17408:
---

copy that

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801844#comment-15801844
 ] 

ChiaPing Tsai commented on HBASE-17408:
---

bq. Please move the above check immediately below where this.maxRowsPerRequest 
is assigned.
copy that.

bq. Why is 1 used in the last line above ?
It means that an extra row is accepted, so we increment the row count by one.

bq. rowSize has no effect ?
Yes, RequestRowsChecker only considers the number of rows; the heap size of the 
row (rowSize) is not used by RequestRowsChecker.
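For reference, a checker of this kind only has to count the rows accepted for the in-flight request; heap accounting stays with the existing size checker. A compact, self-contained sketch (the class name is kept, everything else is illustrative):
{code:title=RequestRowsChecker (sketch)|borderStyle=solid}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.hbase.ServerName;

final class RequestRowsChecker {
  private final long maxRowsPerRequest;                  // e.g. 2048 by default
  private final Map<ServerName, Long> rowsPerServer = new HashMap<>();

  RequestRowsChecker(long maxRowsPerRequest) {
    this.maxRowsPerRequest = maxRowsPerRequest;
  }

  /** rowSize is accepted but ignored on purpose: this checker only counts rows. */
  boolean canTakeRow(ServerName server, long rowSize) {
    return rowsPerServer.getOrDefault(server, 0L) < maxRowsPerRequest;
  }

  void notifyRowTaken(ServerName server) {
    // the "+ 1" records that one more row was accepted for this server's request
    rowsPerServer.merge(server, 1L, Long::sum);
  }

  void reset() {
    rowsPerServer.clear();
  }
}
{code}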

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17408:
--
Attachment: HBASE-17408.v1.patch

v1 adds a trivial change in the hbase-server module.

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17408:
--
Status: Patch Available  (was: Open)

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17408:
--
Status: Open  (was: Patch Available)

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17408:
--
 Assignee: ChiaPing Tsai
Fix Version/s: 2.0.0
Affects Version/s: 2.0.0
   Status: Patch Available  (was: Open)

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17408:
--
Attachment: HBASE-17408.v0.patch

The default threshold for rows is 2k; any suggestions are welcome.
Ping [~yuzhih...@gmail.com], would you please take a look? Thanks.


> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-03 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795722#comment-15795722
 ] 

ChiaPing Tsai commented on HBASE-17408:
---

bq. We should consider adding per request limit through the number of mutations 
in a batch.
Why does it cause the OOME?

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17403) ClientAsyncPrefetchScanner doesn’t load any data if the MaxResultSize is too small

2017-01-03 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15795724#comment-15795724
 ] 

ChiaPing Tsai commented on HBASE-17403:
---

Thanks for your review

> ClientAsyncPrefetchScanner doesn’t load any data if the MaxResultSize is too 
> small
> --
>
> Key: HBASE-17403
> URL: https://issues.apache.org/jira/browse/HBASE-17403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17403.v0.patch
>
>
> Don't assign the value of zero to any threshold.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17403) ClientAsyncPrefetchScanner doesn’t load any data if the MaxResultSize is too small

2017-01-02 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17403:
--
Attachment: HBASE-17403.v0.patch

> ClientAsyncPrefetchScanner doesn’t load any data if the MaxResultSize is too 
> small
> --
>
> Key: HBASE-17403
> URL: https://issues.apache.org/jira/browse/HBASE-17403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17403.v0.patch
>
>
> Don't assign the value of zero to any threshold.
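Concretely, any threshold the prefetch scanner derives from the configured max result size needs a floor of one, so a tiny maxResultSize cannot collapse it to zero and leave the cache permanently empty. A sketch of such a guard; the derived variable names are assumptions, not the attached patch.
{code:title=sketch|borderStyle=solid}
// Derive the prefetch thresholds from the configured max result size, but
// never let a tiny value collapse them to zero (zero would load no data).
long maxResultSize = scan.getMaxResultSize();
int cacheCapacity = Math.max(1, (int) (maxResultSize / estimatedCellSize));
int prefetchTriggerSize = Math.max(1, cacheCapacity / 2);
{code}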



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17403) ClientAsyncPrefetchScanner doesn’t load any data if the MaxResultSize is too small

2017-01-02 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17403:
--
Assignee: ChiaPing Tsai
  Status: Patch Available  (was: Open)

> ClientAsyncPrefetchScanner doesn’t load any data if the MaxResultSize is too 
> small
> --
>
> Key: HBASE-17403
> URL: https://issues.apache.org/jira/browse/HBASE-17403
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17403.v0.patch
>
>
> Don't assign the value of zero to any threshold.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

