Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-167393002
@srowen I have opened another pull request, so this one will be closed.
New pull request: https://github.com/apache/spark/pull/10487
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user QiangCai closed the pull request at:
https://github.com/apache/spark/pull/10310
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-167117473
@srowen I may have made a mistake in Git Bash. Can I open another pull
request to resolve this problem?
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-167118628
I think you just need to rebase from master and force-push the result, but
do what you need to.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-166918642
@QiangCai thanks that looks good, but this needs a rebase now.
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-166912120
@srowen I have modified all the code and tried to keep it consistent.
First, the vars numPartsToTry and partsScanned have been changed to Long.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-166096808
@QiangCai I think that technically `var partsScanned = 0` in `take` is a
problem since it's incremented by `numPartsToTry` and could overflow causing
`partsScanned <
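The overflow srowen describes can be sketched in a few lines. This is an illustrative standalone snippet, not the actual Spark source: the values of `partsScanned` and `numPartsToTry` are chosen only to show that an `Int` accumulator wraps past `Int.MaxValue`, after which a guard like `partsScanned < totalParts` succeeds spuriously.

```scala
object OverflowSketch {
  def main(args: Array[String]): Unit = {
    // Illustrative magnitudes; in executeTake numPartsToTry is grown
    // multiplicatively each round, so it can reach this range.
    var partsScanned: Int = 1610612736      // 1.5 * 2^30
    val numPartsToTry: Int = 1610612736
    partsScanned += numPartsToTry           // wraps past Int.MaxValue
    println(partsScanned)                   // -1073741824: wrapped negative
    // A loop guard such as `partsScanned < totalParts` would now pass
    // even though far more partitions than totalParts were "scanned":
    println(partsScanned < 100)             // true
  }
}
```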
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-165964135
Yes. I will check everything first.
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-165946596
Yes. I will change them all.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-165376341
Can you also change take in RDD?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-165045403
**[Test build #2220 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2220/consoleFull)**
for PR 10310 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-165026607
**[Test build #2220 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2220/consoleFull)**
for PR 10310 at commit
Github user 3ourroom commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-164752595
NAVER - http://www.naver.com/
(Automatic reply, translated from Korean:) The mail sent to
3ourr...@naver.com, for the following reason
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-164752297
Can one of the admins verify this patch?
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10310#discussion_r47633144
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala ---
@@ -206,7 +206,7 @@ abstract class SparkPlan extends
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-164758610
**[Test build #2217 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2217/consoleFull)**
for PR 10310 at commit
GitHub user QiangCai opened a pull request:
https://github.com/apache/spark/pull/10310
[SPARK-12340][SQL] Fix overstep the bounds of Int in SparkPlan.executeTake
Modifies partsScanned to partsScanned.toLong and changes the result of
math.min to Int.
You can merge this pull request into
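The shape of the fix described above can be sketched as follows. This is a hedged standalone sketch, not the actual Spark diff: the accumulators are widened to `Long` so the sum cannot wrap, and the value is narrowed back to `Int` only after `math.min` has clamped it to `totalParts` (which fits in an `Int`). The value of `totalParts` and the quadrupling heuristic are illustrative assumptions.

```scala
object FixSketch {
  def main(args: Array[String]): Unit = {
    val totalParts: Int = 100
    var partsScanned: Long = 0L         // was Int before the fix
    var numPartsToTry: Long = 1L
    while (partsScanned < totalParts) {
      // Clamp in Long space first; the .toInt afterwards is then safe.
      val upTo: Int =
        math.min(partsScanned + numPartsToTry, totalParts.toLong).toInt
      val p = partsScanned.toInt until upTo  // partition ids for this round
      partsScanned += p.size                 // advance by partitions scanned
      numPartsToTry *= 4                     // grow the batch heuristically
    }
    println(partsScanned)                    // 100: terminates correctly
  }
}
```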
Github user 3ourroom commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-164752088
NAVER - http://www.naver.com/
(Automatic reply, translated from Korean:) The mail sent to
3ourr...@naver.com <[spark] [SPARK-12340][SQL] Fix
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-164785707
**[Test build #2217 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2217/consoleFull)**
for PR 10310 at commit
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-164967364
The number of partitions that have been scanned is the size of seq `p`, not
numPartsToTry.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/10310#issuecomment-165026356
Yes I think that also makes sense, in the case where we hit the limit of
`totalParts`.