[
https://issues.apache.org/jira/browse/HADOOP-11343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234027#comment-14234027
]
Yi Liu edited comment on HADOOP-11343 at 12/4/14 10:27 AM:
-----------------------------------------------------------
Hi guys, I took an hour this afternoon to look at this issue. First, it's a good
catch from Jerry.
In our code, the IV has two parts: the first 8 bytes (the IV part), and the
second 8 bytes (the counter part).
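For reference, this is how I read the 16-byte IV layout (assuming {{CTR_OFFSET}} is 8, which matches the {{calculateIV}} code quoted in the issue description below):
{code}
// Assumed 16-byte IV layout (assumption: CTR_OFFSET == 8)
//   IV[0..7]  : copied unchanged from initIV[0..7]              ("IV part")
//   IV[8..15] : big-endian 64-bit value = initIV[8..15] + counter ("counter part")
{code}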
*1.*
{code}
/* NOTE: the IV/counter CTR mode is big-endian.*/
/* increment counter (128-bit int) by 1 */
static void ctr128_inc(unsigned char *counter) {
u32 n=16;
u8 c;
do {
--n;
c = counter[n];
++c;
counter[n] = c;
if (c) return;
} while (n);
}
{code}
Above is the OpenSSL implementation of counter increment.
Here the counter is actually the whole IV (8-byte IV part plus 8-byte counter part,
same as in our code).
It's big-endian and handles overflow across the whole 16 bytes.
{code}
/** Increment the counter value. */
private static void increment(byte[] b) {
  int n = b.length - 1;
  while ((n >= 0) && (++b[n] == 0)) {
    n--;
  }
}
{code}
This snippet is the JCE implementation of counter increment.
Here {{byte[] b}} is the counter, which again is the whole IV (8-byte IV part plus
8-byte counter part, same as in our code).
It's big-endian and handles overflow across the whole 16 bytes.
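To illustrate the behavior both implementations share, here is a small self-contained Java sketch (class and variable names are mine, not from any patch) showing the carry rippling from the counter half into the first 8 bytes:
{code}
import java.util.Arrays;

public class CtrIncrementDemo {
  // Same logic as the JCE increment(): big-endian +1 over all 16 bytes.
  static void increment(byte[] b) {
    int n = b.length - 1;
    while ((n >= 0) && (++b[n] == 0)) {
      n--;
    }
  }

  public static void main(String[] args) {
    byte[] iv = new byte[16];
    Arrays.fill(iv, 8, 16, (byte) 0xFF); // counter half is all 0xFF
    increment(iv);
    // The carry ripples into the "IV part": iv[7] == 0x01, iv[8..15] == 0x00.
    System.out.println(Arrays.toString(iv));
  }
}
{code}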
So we can see both of them behave exactly the same and handle the overflow across
the whole 16 bytes (as Mike said :)). We need to add 1 to the first 8 bytes if
overflow happens when we add the counter to the second 8 bytes ({{l + counter}});
currently we don't.
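To make that concrete, one way to handle it is to treat the whole 16-byte IV as a big-endian 128-bit integer and propagate the carry byte by byte. This is only a sketch, not the current patch; it reuses the {{calculateIV}} signature and the {{AES_BLOCK_SIZE}} checks from the existing code quoted in the description:
{code}
// Sketch only: propagate the carry across all 16 bytes (big-endian).
@Override
public void calculateIV(byte[] initIV, long counter, byte[] IV) {
  Preconditions.checkArgument(initIV.length == AES_BLOCK_SIZE);
  Preconditions.checkArgument(IV.length == AES_BLOCK_SIZE);

  int i = IV.length;  // 16
  int j = 0;          // how many counter bytes have been consumed
  int sum = 0;
  while (i-- > 0) {
    // (sum >>> Byte.SIZE) is the carry from the previous byte addition
    sum = (initIV[i] & 0xff) + (sum >>> Byte.SIZE);
    if (j++ < 8) {    // only the low 8 bytes receive the counter itself
      sum += (byte) counter & 0xff;
      counter >>>= 8;
    }
    IV[i] = (byte) sum;
  }
}
{code}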
*2.*
About adding 1 to the first 8 bytes when the counter part calculation overflows:
I don't like the current patch, it could be implemented more clearly.
There is also an obvious error in the current patch:
{code}
+ // Convert the counter to bytes in big-endian byte order
+ for (int i = 7; i > 0; i--) {
+ counterBytes[i] = (byte) counter;
+ counter >>>= 8;
}
{code}
It only loops 7 times ({{i}} runs from 7 down to 1, so {{counterBytes[0]}} is never
written); I wonder how it passes the tests.
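If the byte-conversion approach is kept, I would expect the loop to cover all 8 bytes, something like this (a sketch only, reusing the patch's {{counterBytes}} name):
{code}
// Convert the 64-bit counter to 8 bytes, big-endian; i >= 0 so byte 0 is written too.
for (int i = 7; i >= 0; i--) {
  counterBytes[i] = (byte) counter;
  counter >>>= 8;
}
{code}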
*3.*
Regarding compatibility, I think there is no way to preserve it. For example, we
call {{calculateIV}} and overflow happens, but when reading we don't call
{{calculateIV}} at the same position, so the decryption result will always be
wrong.
But I agree with adding a new SUITE to reject an old DFSClient writing data to a
new-version HDFS...
So maybe we can only fix it...
> Overflow is not properly handled in caclulating final iv for AES CTR
> --------------------------------------------------------------------
>
> Key: HADOOP-11343
> URL: https://issues.apache.org/jira/browse/HADOOP-11343
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 2.6.0
> Reporter: Jerry Chen
> Assignee: Jerry Chen
> Priority: Blocker
> Attachments: HADOOP-11343.patch
>
>
> In the AesCtrCryptoCodec calculateIV, since the init IV is a randomly generated
> 16-byte value,
> final byte[] iv = new byte[cc.getCipherSuite().getAlgorithmBlockSize()];
> cc.generateSecureRandom(iv);
> Then the following calculation of iv and counter in an 8-byte (64-bit) space
> can easily overflow, and this overflow gets lost. The result is that the 128-bit
> data block is encrypted with a wrong counter and cannot be decrypted by standard
> aes-ctr.
> /**
>  * The IV is produced by adding the initial IV to the counter. IV length
>  * should be the same as {@link #AES_BLOCK_SIZE}
>  */
> @Override
> public void calculateIV(byte[] initIV, long counter, byte[] IV) {
>   Preconditions.checkArgument(initIV.length == AES_BLOCK_SIZE);
>   Preconditions.checkArgument(IV.length == AES_BLOCK_SIZE);
>
>   System.arraycopy(initIV, 0, IV, 0, CTR_OFFSET);
>   long l = 0;
>   for (int i = 0; i < 8; i++) {
>     l = ((l << 8) | (initIV[CTR_OFFSET + i] & 0xff));
>   }
>   l += counter;
>   IV[CTR_OFFSET + 0] = (byte) (l >>> 56);
>   IV[CTR_OFFSET + 1] = (byte) (l >>> 48);
>   IV[CTR_OFFSET + 2] = (byte) (l >>> 40);
>   IV[CTR_OFFSET + 3] = (byte) (l >>> 32);
>   IV[CTR_OFFSET + 4] = (byte) (l >>> 24);
>   IV[CTR_OFFSET + 5] = (byte) (l >>> 16);
>   IV[CTR_OFFSET + 6] = (byte) (l >>> 8);
>   IV[CTR_OFFSET + 7] = (byte) (l);
> }