spark git commit: [SPARK-23391][CORE] It may lead to overflow for some integer multiplication

2018-02-12 Thread srowen
Repository: spark
Updated Branches:
  refs/heads/branch-2.2 169483455 -> 14b5dbfa9


[SPARK-23391][CORE] It may lead to overflow for some integer multiplication

## What changes were proposed in this pull request?
In `getBlockData`, `blockId.reduceId` is an `Int`; when it is greater than 2^28, `blockId.reduceId * 8` overflows.
In `decompress0`, `len` and `unitSize` are both `Int`, so `len * unitSize` may also overflow.

## How was this patch tested?
N/A

Author: liuxian 

Closes #20581 from 10110346/overflow2.
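As an illustration (the values below are hypothetical, not taken from the patch), the Int overflow that the `8L` change avoids can be sketched in Scala as:

```scala
// Hypothetical reduce ID above 2^28: reduceId * 8 no longer fits in an Int.
val reduceId: Int = 300000000           // > 2^28 (268435456)
val intPosition = reduceId * 8          // Int arithmetic wraps to a negative value
val longPosition = reduceId * 8L        // the 8L literal widens the multiplication to Long
println(s"Int: $intPosition, Long: $longPosition")
```

With Int math the position handed to `channel.position` would be negative; widening one operand to Long via the `8L` literal keeps the result exact.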

(cherry picked from commit 4a4dd4f36f65410ef5c87f7b61a960373f044e61)
Signed-off-by: Sean Owen 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/14b5dbfa
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/14b5dbfa
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/14b5dbfa

Branch: refs/heads/branch-2.2
Commit: 14b5dbfa9a5ef9555ef9072ff0639985fcf57118
Parents: 1694834
Author: liuxian 
Authored: Mon Feb 12 08:49:45 2018 -0600
Committer: Sean Owen 
Committed: Mon Feb 12 08:52:39 2018 -0600

--
 .../org/apache/spark/shuffle/IndexShuffleBlockResolver.scala | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/14b5dbfa/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala
--
diff --git a/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala b/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala
index 2414b94..449f602 100644
--- a/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala
+++ b/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala
@@ -203,13 +203,13 @@ private[spark] class IndexShuffleBlockResolver(
 // class of issue from re-occurring in the future which is why they are left here even though
 // SPARK-22982 is fixed.
 val channel = Files.newByteChannel(indexFile.toPath)
-channel.position(blockId.reduceId * 8)
+channel.position(blockId.reduceId * 8L)
 val in = new DataInputStream(Channels.newInputStream(channel))
 try {
   val offset = in.readLong()
   val nextOffset = in.readLong()
   val actualPosition = channel.position()
-  val expectedPosition = blockId.reduceId * 8 + 16
+  val expectedPosition = blockId.reduceId * 8L + 16
   if (actualPosition != expectedPosition) {
 throw new Exception(s"SPARK-22982: Incorrect channel position after index file reads: " +
   s"expected $expectedPosition but actual position was $actualPosition.")


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-23391][CORE] It may lead to overflow for some integer multiplication

2018-02-12 Thread srowen
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 1e3118c2e -> d31c4ae7b


[SPARK-23391][CORE] It may lead to overflow for some integer multiplication

## What changes were proposed in this pull request?
In `getBlockData`, `blockId.reduceId` is an `Int`; when it is greater than 2^28, `blockId.reduceId * 8` overflows.
In `decompress0`, `len` and `unitSize` are both `Int`, so `len * unitSize` may also overflow.
## How was this patch tested?
N/A

Author: liuxian 

Closes #20581 from 10110346/overflow2.

(cherry picked from commit 4a4dd4f36f65410ef5c87f7b61a960373f044e61)
Signed-off-by: Sean Owen 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/d31c4ae7
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/d31c4ae7
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/d31c4ae7

Branch: refs/heads/branch-2.3
Commit: d31c4ae7ba734356c849347b9a7b448da9a5a9a1
Parents: 1e3118c
Author: liuxian 
Authored: Mon Feb 12 08:49:45 2018 -0600
Committer: Sean Owen 
Committed: Mon Feb 12 08:49:52 2018 -0600

--
 .../org/apache/spark/shuffle/IndexShuffleBlockResolver.scala | 4 ++--
 .../sql/execution/columnar/compression/compressionSchemes.scala  | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/d31c4ae7/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala
--
diff --git a/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala b/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala
index 2414b94..449f602 100644
--- a/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala
+++ b/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala
@@ -203,13 +203,13 @@ private[spark] class IndexShuffleBlockResolver(
 // class of issue from re-occurring in the future which is why they are left here even though
 // SPARK-22982 is fixed.
 val channel = Files.newByteChannel(indexFile.toPath)
-channel.position(blockId.reduceId * 8)
+channel.position(blockId.reduceId * 8L)
 val in = new DataInputStream(Channels.newInputStream(channel))
 try {
   val offset = in.readLong()
   val nextOffset = in.readLong()
   val actualPosition = channel.position()
-  val expectedPosition = blockId.reduceId * 8 + 16
+  val expectedPosition = blockId.reduceId * 8L + 16
   if (actualPosition != expectedPosition) {
 throw new Exception(s"SPARK-22982: Incorrect channel position after index file reads: " +
   s"expected $expectedPosition but actual position was $actualPosition.")

http://git-wip-us.apache.org/repos/asf/spark/blob/d31c4ae7/sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/compression/compressionSchemes.scala
--
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/compression/compressionSchemes.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/compression/compressionSchemes.scala
index 79dcf3a..00a1d54 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/compression/compressionSchemes.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/compression/compressionSchemes.scala
@@ -116,7 +116,7 @@ private[columnar] case object PassThrough extends CompressionScheme {
   while (pos < capacity) {
 if (pos != nextNullIndex) {
   val len = nextNullIndex - pos
-  assert(len * unitSize < Int.MaxValue)
+  assert(len * unitSize.toLong < Int.MaxValue)
   putFunction(columnVector, pos, bufferPos, len)
   bufferPos += len * unitSize
   pos += len
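The assert fix above can be checked the same way. With hypothetical values (2^28 elements of 8 bytes each), Int multiplication wraps to a negative number, so the original assertion passes even though the true product exceeds `Int.MaxValue`:

```scala
// Hypothetical values mirroring the assert in decompress0: 2^28 elements, 8 bytes each.
val len: Int = 268435456
val unitSize: Int = 8
val intCheck = len * unitSize < Int.MaxValue          // Int math wraps 2^31 to Int.MinValue, so this is true
val longCheck = len * unitSize.toLong < Int.MaxValue  // Long math: 2147483648L is not < Int.MaxValue
println(s"Int check passes spuriously: $intCheck, Long check catches it: $longCheck")
```

Widening one operand with `.toLong` makes the assertion actually fire when the product no longer fits in an Int.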





spark git commit: [SPARK-23391][CORE] It may lead to overflow for some integer multiplication

2018-02-12 Thread srowen
Repository: spark
Updated Branches:
  refs/heads/master 0e2c266de -> 4a4dd4f36


[SPARK-23391][CORE] It may lead to overflow for some integer multiplication

## What changes were proposed in this pull request?
In `getBlockData`, `blockId.reduceId` is an `Int`; when it is greater than 2^28, `blockId.reduceId * 8` overflows.
In `decompress0`, `len` and `unitSize` are both `Int`, so `len * unitSize` may also overflow.
## How was this patch tested?
N/A

Author: liuxian 

Closes #20581 from 10110346/overflow2.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/4a4dd4f3
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/4a4dd4f3
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/4a4dd4f3

Branch: refs/heads/master
Commit: 4a4dd4f36f65410ef5c87f7b61a960373f044e61
Parents: 0e2c266
Author: liuxian 
Authored: Mon Feb 12 08:49:45 2018 -0600
Committer: Sean Owen 
Committed: Mon Feb 12 08:49:45 2018 -0600

--
 .../org/apache/spark/shuffle/IndexShuffleBlockResolver.scala | 4 ++--
 .../sql/execution/columnar/compression/compressionSchemes.scala  | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/4a4dd4f3/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala
--
diff --git a/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala b/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala
index d88b25c..d3f1c7e 100644
--- a/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala
+++ b/core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala
@@ -202,13 +202,13 @@ private[spark] class IndexShuffleBlockResolver(
 // class of issue from re-occurring in the future which is why they are left here even though
 // SPARK-22982 is fixed.
 val channel = Files.newByteChannel(indexFile.toPath)
-channel.position(blockId.reduceId * 8)
+channel.position(blockId.reduceId * 8L)
 val in = new DataInputStream(Channels.newInputStream(channel))
 try {
   val offset = in.readLong()
   val nextOffset = in.readLong()
   val actualPosition = channel.position()
-  val expectedPosition = blockId.reduceId * 8 + 16
+  val expectedPosition = blockId.reduceId * 8L + 16
   if (actualPosition != expectedPosition) {
 throw new Exception(s"SPARK-22982: Incorrect channel position after index file reads: " +
   s"expected $expectedPosition but actual position was $actualPosition.")

http://git-wip-us.apache.org/repos/asf/spark/blob/4a4dd4f3/sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/compression/compressionSchemes.scala
--
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/compression/compressionSchemes.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/compression/compressionSchemes.scala
index 79dcf3a..00a1d54 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/compression/compressionSchemes.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/compression/compressionSchemes.scala
@@ -116,7 +116,7 @@ private[columnar] case object PassThrough extends CompressionScheme {
   while (pos < capacity) {
 if (pos != nextNullIndex) {
   val len = nextNullIndex - pos
-  assert(len * unitSize < Int.MaxValue)
+  assert(len * unitSize.toLong < Int.MaxValue)
   putFunction(columnVector, pos, bufferPos, len)
   bufferPos += len * unitSize
   pos += len

