[ https://issues.apache.org/jira/browse/HBASE-20197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16400461#comment-16400461 ]
BELUGA BEHR commented on HBASE-20197:
-------------------------------------
New patch...
# Hopefully fixed the checkstyle error (my local 'mvn checkstyle:check' run does not report anything)
# Changed the buffer size back to 4K
# Made it lazy (see the sketch below)
# Made a trivial change to the hbase-server module
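In outline, the lazy 4K buffer plus chunked write amounts to something like the following. This is only a sketch; the class and field names ({{ChunkedWriteSketch}}, {{BUFFER_SIZE}}, {{buf}}) are illustrative, not necessarily those in the patch:
{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

public class ChunkedWriteSketch {

  private static final int BUFFER_SIZE = 4 * 1024; // back to 4K

  private final OutputStream out;
  private byte[] buf; // allocated lazily, on first use

  public ChunkedWriteSketch(OutputStream out) {
    this.out = out;
  }

  public void write(ByteBuffer b, int off, int len) throws IOException {
    if (buf == null) {
      buf = new byte[BUFFER_SIZE]; // lazy: no allocation until the first write
    }
    // Work against a duplicate so the caller's position/limit are untouched.
    ByteBuffer view = b.duplicate();
    view.position(off);
    view.limit(off + len);
    while (view.hasRemaining()) {
      int n = Math.min(buf.length, view.remaining());
      view.get(buf, 0, n); // relative bulk get: no backing-array access required
      out.write(buf, 0, n);
    }
  }
}
{code}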
I did not use the BBUtils API. It adds overhead that is not required.
I am using the {{ByteBuffer}} [relative bulk get
method|https://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html#get-byte:A-int-int-]
which is a built-in facility. There is no need for custom code that replicates
the same behavior.
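A quick illustration of that built-in facility; note that it works even when the buffer exposes no backing array:
{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class BulkGetDemo {
  public static void main(String[] args) {
    // For a read-only buffer, hasArray() is false and array()/arrayOffset() throw,
    // but the relative bulk get still works.
    ByteBuffer src = ByteBuffer.wrap("hello world".getBytes(StandardCharsets.UTF_8))
        .asReadOnlyBuffer();
    byte[] dst = new byte[5];
    src.get(dst, 0, dst.length); // copies 5 bytes and advances src's position
    System.out.println(new String(dst, StandardCharsets.UTF_8)); // prints "hello"
  }
}
{code}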
If you trace the BBUtils API, what you see is this code:
{code:java}
public static void copyFromBufferToArray(byte[] out, ByteBuffer in, int sourceOffset,
    int destinationOffset, int length) {
  if (in.hasArray()) {
    System.arraycopy(in.array(), sourceOffset + in.arrayOffset(), out,
      destinationOffset, length);
  } else if (UNSAFE_AVAIL) {
    UnsafeAccess.copy(in, sourceOffset, out, destinationOffset, length);
  } else {
    ByteBuffer inDup = in.duplicate();
    inDup.position(sourceOffset);
    inDup.get(out, destinationOffset, length);
  }
}
{code}
We are using a ByteBuffer here that is not read-only, so it actually hits the
first condition and executes this code:
{quote}System.arraycopy(in.array(), sourceOffset + in.arrayOffset(), out, destinationOffset, length);
{quote}
That is almost exactly what the {{ByteBuffer}} relative bulk get method does
anyway, so there are no savings here, just overhead and complexity.
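For comparison, OpenJDK 8's {{HeapByteBuffer}} implements the relative bulk get roughly as follows (paraphrased from the JDK sources, so treat it as illustrative):
{code:java}
// java.nio.HeapByteBuffer (OpenJDK 8), roughly:
public ByteBuffer get(byte[] dst, int offset, int length) {
  checkBounds(offset, length, dst.length);
  if (length > remaining())
    throw new BufferUnderflowException();
  // The same System.arraycopy as the first branch of copyFromBufferToArray,
  // with 'hb' the backing array and ix() applying the array offset.
  System.arraycopy(hb, ix(position()), dst, offset, length);
  position(position() + length);
  return this;
}
{code}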
Regarding the second condition... there is a bug there that I just noticed.
{code:java|title=org.apache.hadoop.hbase.util.UnsafeAccess}
public static void copy(ByteBuffer src, int srcOffset, byte[] dest, int destOffset,
    int length) {
  long srcAddress = srcOffset;
  Object srcBase = null;
  if (src.isDirect()) {
    srcAddress = srcAddress + ((DirectBuffer) src).address();
  } else {
    srcAddress = srcAddress + BYTE_ARRAY_BASE_OFFSET + src.arrayOffset();
    srcBase = src.array();
  }
  long destAddress = destOffset + BYTE_ARRAY_BASE_OFFSET;
  unsafeCopy(srcBase, srcAddress, dest, destAddress, length);
}
{code}
The issue here is the
[arrayOffset()|https://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html#arrayOffset--]
call. The JavaDocs say:
{quote}Invoke the hasArray method before invoking this method in order to
ensure that this buffer has an accessible backing array.
{quote}
However, as we saw in the previous method, if _hasArray_ returns true, we do a
_System.arraycopy_, so the only reason we would reach this _copy_ code with a
non-direct buffer is that there is no access to the backing array, yet here the
code depends on having exactly that access. That breaks on read-only
ByteBuffers, a case that does not affect the _relative bulk get method_.
{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.hbase.io.ByteBufferWriterOutputStream;

public class Test {
  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ByteBufferWriterOutputStream bbwos = new ByteBufferWriterOutputStream(baos);
    // Read-only wrapper: hasArray() is false, so the UnsafeAccess path is taken.
    ByteBuffer bbSmall = ByteBuffer.wrap(new byte[512]).asReadOnlyBuffer();
    bbwos.write(bbSmall, 0, 512);
    bbwos.close();
  }
}
{code}
{noformat}
Exception in thread "main" java.nio.ReadOnlyBufferException
	at java.nio.ByteBuffer.arrayOffset(ByteBuffer.java:1024)
	at org.apache.hadoop.hbase.util.UnsafeAccess.copy(UnsafeAccess.java:398)
	at org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray(ByteBufferUtils.java:54)
	at org.apache.hadoop.hbase.io.ByteBufferWriterOutputStream.write(ByteBufferWriterOutputStream.java:59)
	at org.apache.hadoop.hbase.io.Test.main(Test.java:14)
{noformat}
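One possible shape of a fix, sketched here against the same method (this is not a committed patch, just an illustration): guard the array access with {{hasArray()}} and fall back to the relative bulk get when the backing array is inaccessible.
{code:java}
// Sketch of a possible guard in UnsafeAccess.copy; not the actual fix.
public static void copy(ByteBuffer src, int srcOffset, byte[] dest, int destOffset,
    int length) {
  long srcAddress = srcOffset;
  Object srcBase = null;
  if (src.isDirect()) {
    srcAddress += ((DirectBuffer) src).address();
  } else if (src.hasArray()) { // only touch array()/arrayOffset() when accessible
    srcAddress += BYTE_ARRAY_BASE_OFFSET + src.arrayOffset();
    srcBase = src.array();
  } else {
    // Read-only heap buffer: no Unsafe shortcut possible; use the relative bulk get.
    ByteBuffer dup = src.duplicate();
    dup.position(srcOffset);
    dup.get(dest, destOffset, length);
    return;
  }
  long destAddress = destOffset + BYTE_ARRAY_BASE_OFFSET;
  unsafeCopy(srcBase, srcAddress, dest, destAddress, length);
}
{code}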
> Review of ByteBufferWriterOutputStream.java
> -------------------------------------------
>
> Key: HBASE-20197
> URL: https://issues.apache.org/jira/browse/HBASE-20197
> Project: HBase
> Issue Type: Improvement
> Components: hbase
> Affects Versions: 2.0.0
> Reporter: BELUGA BEHR
> Assignee: BELUGA BEHR
> Priority: Minor
> Attachments: HBASE-20197.1.patch, HBASE-20197.2.patch,
> HBASE-20197.3.patch, HBASE-20197.4.patch
>
>
> In looking at this class, two things caught my eye.
> # Default buffer size of 4K
> # Re-sizing of buffer on demand
>
> Java's {{BufferedOutputStream}} uses an internal buffer size of 8K on modern
> JVMs. This is due to various benchmarking that showed optimal performance at
> this level.
> The buffer re-sizing logic looks a bit "unsafe":
>
> {code:java}
> public void write(ByteBuffer b, int off, int len) throws IOException {
>   byte[] buf = null;
>   if (len > TEMP_BUF_LENGTH) {
>     buf = new byte[len];
>   } else {
>     if (this.tempBuf == null) {
>       this.tempBuf = new byte[TEMP_BUF_LENGTH];
>     }
>     buf = this.tempBuf;
>   }
>   ...
> }
> {code}
> If this method gets successive calls with 'len' values just over
> TEMP_BUF_LENGTH (say 4097, then 4098, then 4099, and so on), a fresh buffer
> will be allocated on every call. Also, it seems unsafe to create a buffer as
> large as the 'len' input. This could theoretically lead to a 2GB buffer
> allocation inside each instance of this class.
> I propose:
> # Increase the default buffer size to 8K
> # Create the buffer once and chunk the output instead of loading data into a
> single array and writing it to the output stream.
>