Github user QiangCai closed the pull request at:
https://github.com/apache/spark/pull/10619
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10619#issuecomment-169700777
@sarutak I will do it.
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10562#issuecomment-169369194
OK. I have created another PR https://github.com/apache/spark/pull/10619.
---
GitHub user QiangCai opened a pull request:
https://github.com/apache/spark/pull/10619
[SPARK-12340][SQL]fix Int overflow in the SparkPlan.executeTake, RDD.take
and AsyncRDDActions.takeAsync for branch-1.6
I created this PR to merge this code into branch-1.6. And I have merged
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10619#issuecomment-169502987
SPARK-12340 has passed this test, but another error has happened.
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10562#issuecomment-169063131
@sarutak Maybe we have found another bug. I will try to fix it.
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10562#issuecomment-169070917
I have removed the initial size of the ArrayBuffer instance. The default
size is 16.
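As a hedged illustration of the point above (hypothetical code, not the PR's actual diff): an ArrayBuffer created without an explicit size starts at Scala's default capacity of 16 and grows as elements are appended, whereas pre-sizing it from a very large requested count can allocate a huge array up front.

```scala
import scala.collection.mutable.ArrayBuffer

// Hypothetical illustration, not the PR's actual diff: creating the buffer
// without an explicit initial size lets it start at the default capacity (16)
// and grow geometrically on demand, instead of pre-allocating for a huge
// requested count (e.g. `new ArrayBuffer[T](num)` with an enormous `num`,
// which can trigger an OutOfMemoryError before any element is added).
object BufferSizeSketch {
  def collect(n: Int): ArrayBuffer[Int] = {
    val buf = new ArrayBuffer[Int]() // default capacity, no pre-allocation
    (1 to n).foreach(buf += _)       // grows as elements arrive
    buf
  }

  def main(args: Array[String]): Unit = {
    println(collect(100).length)
  }
}
```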
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10562#issuecomment-169056589
@srowen I found an error message in the test build log: an OutOfMemoryError
exception happened. The code at line 71 of the file AsyncRDDActions.scala
is "val re
Github user QiangCai commented on a diff in the pull request:
https://github.com/apache/spark/pull/10562#discussion_r48868638
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2067,4 +2067,16 @@ class SQLQuerySuite extends QueryTest
Github user QiangCai commented on a diff in the pull request:
https://github.com/apache/spark/pull/10562#discussion_r48844685
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2028,6 +2028,7 @@ class SQLQuerySuite extends QueryTest
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10562#issuecomment-168997795
@srowen I have rebased from master and resolved all conflicts.
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10562#issuecomment-169191091
I think I have resolved this problem.
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10562#issuecomment-168863650
@srowen I have no idea how to resolve this error. Would you help me?
---
Github user QiangCai closed the pull request at:
https://github.com/apache/spark/pull/10562
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10562#issuecomment-168674310
@srowen I have removed some whitespaces.
---
GitHub user QiangCai reopened a pull request:
https://github.com/apache/spark/pull/10562
[SPARK-12340][SQL]fix Int overflow in the SparkPlan.executeTake, RDD.take
and AsyncRDDActions.takeAsync
I have closed pull request https://github.com/apache/spark/pull/10487. And
I create
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10562#issuecomment-168564772
I have removed some blank lines.
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10487#issuecomment-168500336
I have created another pull request
https://github.com/apache/spark/pull/10562.
---
Github user QiangCai closed the pull request at:
https://github.com/apache/spark/pull/10487
---
GitHub user QiangCai opened a pull request:
https://github.com/apache/spark/pull/10562
[SPARK-12340][SQL]fix Int overflow in the SparkPlan.executeTake, RDD.take
and AsyncRDDActions.takeAsync
I have closed pull request https://github.com/apache/spark/pull/10487. And I
create
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10487#issuecomment-168467402
@sarutak When I was rebasing from master, I got many conflicts. I don't
know how to resolve them. I have just pushed the commit "merge".
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10487#issuecomment-167709123
@sarutak I will try to add test cases.
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-167393002
@srowen I have opened another pull request,
https://github.com/apache/spark/pull/10487. This pull request will be closed.
---
GitHub user QiangCai opened a pull request:
https://github.com/apache/spark/pull/10487
[SPARK-12340][SQL]fix Int overflow in the SparkPlan.executeTake, RDD.take
and AsyncRDDActions.takeAsync
@srowen I opened this new pull request to resolve the problem.
another pull request
Github user QiangCai closed the pull request at:
https://github.com/apache/spark/pull/10310
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-167117473
@srowen I may have made a mistake in git bash. Can I open another pull
request to resolve this problem?
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-166912120
@srowen I have modified all the code and tried to keep it consistent.
At first, the vars numPartsToTry and partsScanned were set to Long
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-165964135
Yes. I will check everything first.
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-165946596
Yes. I will change them all.
---
GitHub user QiangCai opened a pull request:
https://github.com/apache/spark/pull/10310
[SPARK-12340][SQL] Fix overstep the bounds of Int in SparkPlan.executeTake
Modifies partsScanned to partsScanned.toLong and changes the result of
math.min to Int.
You can merge this pull request
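A minimal sketch of the overflow pattern this PR describes (hypothetical names and a simplified shape, not the actual Spark source): keeping partsScanned as a Long prevents the quadrupling scan-ahead estimate from wrapping around Int.MaxValue, and the value is clamped to the remaining partitions before being converted back to Int.

```scala
// Hypothetical sketch of the Int-overflow fix pattern described above;
// the names and 4x growth factor mirror RDD.take-style scan-ahead logic,
// but this is not the actual Spark source.
object TakeOverflowSketch {
  // Estimate how many partitions to scan next, using Long arithmetic.
  def nextPartsToTry(totalParts: Int, partsScanned: Long): Int = {
    // Quadruple the number of partitions tried on each round.
    val numPartsToTry: Long = if (partsScanned == 0L) 1L else partsScanned * 4L
    // Clamp to the remaining partitions, then convert back to Int safely.
    math.min(numPartsToTry, totalParts - partsScanned).toInt
  }

  def main(args: Array[String]): Unit = {
    // With Int arithmetic, 1500000000 * 4 would overflow and go negative;
    // with Long it stays positive and the clamp keeps the result in range.
    println(nextPartsToTry(2000000000, 1500000000L))
  }
}
```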
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-164967364
The number of partitions scanned is the size of seq p, not
numPartsToTry.
---