    [ https://issues.apache.org/jira/browse/DERBY-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12513001 ]

Knut Anders Hatlen commented on DERBY-2936:
-------------------------------------------

Thanks for looking at the patch, Bryan!

Yes, the ByteBuffer methods do the masking and shifting automatically for us.

In the first diff, "(byte) (value & 0xff)" is actually identical to "(byte) 
value". The "& 0xff" masks away all but the eight least significant bits, and 
the cast to byte only keeps those same eight bits, which the mask didn't 
touch, so the masking wasn't needed in the first place.
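
As a quick illustration (a standalone sketch, not code from the patch), both 
expressions produce the same byte for any int:

  int value = 0x12345680;              // arbitrary example value
  byte masked = (byte) (value & 0xff); // with the explicit mask
  byte plain  = (byte) value;          // cast only
  // both are (byte) 0x80, i.e. -128, so masked == plain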

In the second diff, the original code manually encoded an int as a two-byte 
big-endian byte sequence (and I think the same argument about masking applies 
to that code as well). ByteBuffer can read/write both big-endian and 
little-endian byte order; the default is big-endian. So if you pass in an int 
consisting of the following bits, xxxxxxxxyyyyyyyyzzzzzzzzwwwwwwww, the old 
code would do the following (sketched in code after the list):

  1. right shift (without preserving the sign bit) so that the int becomes 
00000000xxxxxxxxyyyyyyyyzzzzzzzz
  2. mask away the three most significant bytes from (1), which gives this int: 
000000000000000000000000zzzzzzzz
  3. store the eight least significant bits (zzzzzzzz) in bytes[offset]
  4. mask away the three most significant bytes from the original int: 
000000000000000000000000wwwwwwww
  5. store the least significant byte of (4) in bytes[offset+1]: wwwwwwww
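
In code, the old approach was roughly the following (a reconstruction for 
illustration, not the exact lines from DDMWriter):

  bytes[offset]     = (byte) ((value >>> 8) & 0xff); // steps 1-3: zzzzzzzz
  bytes[offset + 1] = (byte) (value & 0xff);         // steps 4-5: wwwwwwww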

The new code does this (see the one-line sketch after the list):

  1. cast original int to short, discarding the two most significant bytes. Bit 
pattern for the short: zzzzzzzzwwwwwwww
  2. store the short in two bytes, big-endian byte order, that is
        - first byte: zzzzzzzz
        - second byte: wwwwwwww
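
With the buffer wrapped in a java.nio.ByteBuffer (default big-endian order), 
that collapses to a single call, roughly:

  buffer.putShort((short) value); // writes zzzzzzzz, then wwwwwwww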

Note that it's the bit patterns of the ints/shorts/bytes that are interesting, 
not the actual values of the bytes. So it's perfectly fine to encode a positive 
int as two negative byte values.
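
For instance (value made up for illustration), the positive int 0xCA80 
(51840) ends up as two negative bytes:

  int value = 0xCA80;                   // 51840, a positive int
  byte highByte = (byte) (value >>> 8); // (byte) 0xCA == -54
  byte lowByte  = (byte) value;         // (byte) 0x80 == -128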

If we decode the values with a ByteBuffer (that would be in DDMReader), we 
might have to do some masking, depending on whether we treat them as signed 
or unsigned shorts. Since a Java short is signed, the code for reading an 
unsigned short from a byte buffer would look like this:

  int ushort = 0xffff & buffer.getShort();
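
Put together, a self-contained round trip (illustrative only, with a made-up 
value) would be:

  ByteBuffer buffer = ByteBuffer.allocate(2);
  buffer.putShort((short) 0xCA80);          // writer side (DDMWriter)
  buffer.flip();                            // switch the buffer to reading
  int ushort = 0xffff & buffer.getShort();  // reader side: 51840, not -13696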

> Use java.nio.ByteBuffer for buffering in DDMWriter
> --------------------------------------------------
>
>                 Key: DERBY-2936
>                 URL: https://issues.apache.org/jira/browse/DERBY-2936
>             Project: Derby
>          Issue Type: Improvement
>          Components: Network Server
>            Reporter: Knut Anders Hatlen
>            Assignee: Knut Anders Hatlen
>            Priority: Minor
>         Attachments: d2936-1.diff
>
>
> org.apache.derby.impl.drda.DDMWriter uses a byte array as a buffer. Wrapping 
> the array in a java.nio.ByteBuffer has some advantages, for instance:
>   - utility methods for encoding primitive types into the byte array could be 
> used instead of manually encoding the values
>   - it allows us to encode strings directly into the buffer (using a 
> CharsetEncoder) without doing an expensive String.getBytes(String encoding) 
> in an intermediate step
> By using a utility class, the code becomes easier to maintain. Also, 
> ByteBuffer allows us to access the backing byte array without going through 
> the ByteBuffer interface, so we still have the possibility to modify the byte 
> array directly in cases where that's more convenient.
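
(As an aside on the CharsetEncoder point above: encoding a string directly 
into the buffer would look roughly like the sketch below; the charset, buffer 
size and string are made up for illustration, and real code would also check 
the returned CoderResult.)

  CharsetEncoder encoder = Charset.forName("UTF-8").newEncoder();
  ByteBuffer buffer = ByteBuffer.allocate(128);
  encoder.encode(CharBuffer.wrap("some string"), buffer, true);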

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
