GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/18032
[SPARK-20806][DEPLOY] Launcher: redundant check for Spark lib dir
## What changes were proposed in this pull request?
Remove redundant check for libdir in CommandBuilderUtils
Github user ala commented on a diff in the pull request:
https://github.com/apache/spark/pull/18030#discussion_r117433160
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/GenerateUnsafeProjection.scala
---
@@ -50,10 +50,15 @@ object
GitHub user kiszk opened a pull request:
https://github.com/apache/spark/pull/18033
Add compression/decompression of column data to ColumnVector
## What changes were proposed in this pull request?
This PR adds compression/decompression of column data to `ColumnVector`.
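As a rough illustration of the idea (not the PR's actual codec), run-length encoding is one simple scheme for compressing repetitive column data; a minimal sketch:

```scala
import scala.collection.mutable.ArrayBuffer

// Toy run-length codec for an int column: compress to (value, runLength)
// pairs, decompress by expanding each run back out.
object RunLengthCodec {
  def compress(column: Array[Int]): Array[(Int, Int)] = {
    val runs = ArrayBuffer.empty[(Int, Int)]
    for (v <- column) {
      if (runs.nonEmpty && runs.last._1 == v) {
        // Extend the current run.
        val (value, len) = runs.remove(runs.length - 1)
        runs += ((value, len + 1))
      } else {
        // Start a new run.
        runs += ((v, 1))
      }
    }
    runs.toArray
  }

  def decompress(runs: Array[(Int, Int)]): Array[Int] =
    runs.flatMap { case (value, len) => Array.fill(len)(value) }
}
```

For example, `RunLengthCodec.compress(Array(7, 7, 7, 1, 1, 2))` yields `(7,3), (1,2), (2,1)`, and decompressing that restores the original column.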
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17936
How much of a performance difference does this make, compared with caching the two RDDs
before doing the cartesian?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17880
I have fixed the `Scala style` issues.
The test has not started; could you help trigger it? Thanks @HyukjinKwon
@gatorsmile
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18033
Merged build finished. Test FAILed.
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/18034#discussion_r117443669
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAModel.scala ---
@@ -468,7 +469,16 @@ object LocalLDAModel extends Loader[LocalLDAModel]
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18034
Can one of the admins verify this patch?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17455
**[Test build #3730 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3730/testReport)**
for PR 17455 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18035
**[Test build #77094 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77094/testReport)**
for PR 18035 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17936
I agree with @srowen. This adds quite a bit of complexity. If there is not much
difference compared with caching the RDDs before doing the cartesian (or other approaches),
it may not be worth such complexity.
---
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/18031#discussion_r117443085
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -193,8 +219,27 @@ private[spark] object HighlyCompressedMapStatus {
}
Github user ConeyLiu commented on the issue:
https://github.com/apache/spark/pull/17936
`Broadcast` first fetches all the blocks to the driver and caches them
locally; then the executors fetch them from the driver. I think it's really time
consuming.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17868
**[Test build #3734 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3734/testReport)**
for PR 17868 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17940
**[Test build #3733 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3733/testReport)**
for PR 17940 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17869
**[Test build #3736 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3736/testReport)**
for PR 17869 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18013
**[Test build #3735 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3735/testReport)**
for PR 18013 at commit
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17936
@viirya, this is slightly different from caching an RDD. It is more like
broadcasting: the final state is that each executor will hold the whole data of
RDD2; the difference is that this is
Github user ConeyLiu commented on the issue:
https://github.com/apache/spark/pull/17936
Sorry for the mistake; this test result should be for the cached situation:
| -- | -- | -- |
| 15.877s | 2827.373s | 178x |
| 16.781s | 2809.502s | 167x |
| 16.320s |
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/17698
@gatorsmile I have added test cases to the file `cast.sql` , thanks.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17936
@jerryshao As you mentioned broadcasting, another question might be, can we
just use broadcasting to achieve similar performance without such changes?
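The two alternatives being weighed in this thread can be sketched as follows (a minimal illustration only, assuming a running `SparkContext` named `sc` and that `rdd2` is small enough to collect; this is not the PR's implementation):

```scala
import org.apache.spark.storage.StorageLevel

val rdd1 = sc.parallelize(1 to 1000)
val rdd2 = sc.parallelize(1 to 100)

// Alternative 1: cache both sides, so neither RDD is re-computed when the
// cartesian fetches their partitions repeatedly.
val viaCaching = rdd1.persist(StorageLevel.MEMORY_ONLY)
  .cartesian(rdd2.persist(StorageLevel.MEMORY_ONLY))

// Alternative 2: broadcast the smaller side once per executor and expand
// locally, avoiding repeated network fetches of rdd2's partitions.
val smaller = sc.broadcast(rdd2.collect())
val viaBroadcast = rdd1.flatMap(x => smaller.value.map(y => (x, y)))
```

Both produce the same pairs; the trade-off under discussion is re-computation and repeated transfer (plain cartesian) versus memory pressure on the driver and executors (caching or broadcasting).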
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17992
**[Test build #3732 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3732/testReport)**
for PR 17992 at commit
GitHub user yanboliang opened a pull request:
https://github.com/apache/spark/pull/18035
[MINOR][SPARKR][ML] Fix coefficients issue and code cleanup for SparkR
linear SVM.
## What changes were proposed in this pull request?
Fix coefficients issue and code cleanup for SparkR
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18014
@cloud-fan What would you think?
---
Github user ConeyLiu commented on the issue:
https://github.com/apache/spark/pull/17936
OK, I'll add it. From the test data, the performance gain is still very obvious,
mainly from the reduced network and disk overhead.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18030
**[Test build #77090 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77090/testReport)**
for PR 18030 at commit
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/18031#discussion_r117440029
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -193,8 +219,27 @@ private[spark] object HighlyCompressedMapStatus {
}
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/18031#discussion_r117440204
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -121,48 +126,69 @@ private[spark] class CompressedMapStatus(
}
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18032
**[Test build #77089 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77089/testReport)**
for PR 18032 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17936#discussion_r117429928
--- Diff: core/src/main/scala/org/apache/spark/rdd/CartesianRDD.scala ---
@@ -71,9 +72,92 @@ class CartesianRDD[T: ClassTag, U: ClassTag](
}
Github user ConeyLiu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17936#discussion_r117432268
--- Diff: core/src/main/scala/org/apache/spark/rdd/CartesianRDD.scala ---
@@ -71,9 +72,92 @@ class CartesianRDD[T: ClassTag, U: ClassTag](
}
Github user bOOm-X commented on the issue:
https://github.com/apache/spark/pull/18004
@markhamstra, @vanzin: Can I have a review, please?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18016
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18016
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/77087/
Test PASSed.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18030
---
Github user ConeyLiu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17936#discussion_r117427240
--- Diff: core/src/main/scala/org/apache/spark/rdd/CartesianRDD.scala ---
@@ -71,9 +72,92 @@ class CartesianRDD[T: ClassTag, U: ClassTag](
}
Github user liu-zhaokun commented on the issue:
https://github.com/apache/spark/pull/17992
@srowen
Hi, do you know why this PR can't pass the test? I don't think it's caused by my
change.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17880
**[Test build #3731 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3731/testReport)**
for PR 17880 at commit
GitHub user d0evi1 opened a pull request:
https://github.com/apache/spark/pull/18034
[SPARK-20797][MLLIB]fix LocalLDAModel.save() bug.
## What changes were proposed in this pull request?
LocalLDAModel's model save function has a bug:
please see:
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18033
**[Test build #77091 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77091/testReport)**
for PR 18033 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17770
**[Test build #77088 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77088/testReport)**
for PR 17770 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17770
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/77088/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18033
**[Test build #77092 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77092/testReport)**
for PR 18033 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17770
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18011
**[Test build #77093 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77093/testReport)**
for PR 18011 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17923
**[Test build #3737 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3737/testReport)**
for PR 17923 at commit
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18031#discussion_r117461089
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -193,8 +219,27 @@ private[spark] object HighlyCompressedMapStatus {
Github user ConeyLiu commented on the issue:
https://github.com/apache/spark/pull/17936
Yeah, I think I can do the performance comparison.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18033
**[Test build #77091 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77091/testReport)**
for PR 18033 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18033
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/77091/
Test FAILed.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17936
@jerryshao Yeah, the reason I mentioned caching is to find out how much
re-computing the RDD costs in performance. It seems to me that if re-computing
costs much more than transferring the data,
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18030
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18030
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/77090/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18032
Merged build finished. Test PASSed.
---
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/18030
LGTM - merging to master. Thanks!
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17880
**[Test build #3731 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3731/testReport)**
for PR 17880 at commit
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18031#discussion_r117460170
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -193,8 +219,27 @@ private[spark] object HighlyCompressedMapStatus {
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17936
I see. I think at least we should make this cache mechanism controllable by a
flag. I'm guessing that in some HPC clusters, or in a single-node cluster, this
problem is not so severe.
---
Github user ConeyLiu commented on the issue:
https://github.com/apache/spark/pull/17936
I did not directly test this situation, but I have tested this PR against the
latest `ALS` (after merging #17742). In `ALS`, both RDDs are
cached, and also grouped the
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18016
**[Test build #77087 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77087/testReport)**
for PR 18016 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17936
It seems it should still be better than the original cartesian, since it saves
re-computing the RDD and re-transferring the data?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18030
**[Test build #77090 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77090/testReport)**
for PR 18030 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18032
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/77089/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18032
**[Test build #77089 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77089/testReport)**
for PR 18032 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18023
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18023
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/77108/
Test PASSed.
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17996
right, I reviewed them
-
[this](https://github.com/apache/spark/pull/17996/files#diff-a9770b923a4959616bc2126d4afd61eaR35)
in ML could also affect R
-
GitHub user lys0716 opened a pull request:
https://github.com/apache/spark/pull/18038
[MINOR][SPARKRSQL]Remove unnecessary comment in SqlBase.g4
## What changes were proposed in this pull request?
The issue(https://github.com/antlr/antlr4/issues/781) in the comment is
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16697
**[Test build #77111 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77111/testReport)**
for PR 16697 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16697
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16697
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/77111/
Test PASSed.
---
Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17723#discussion_r117595679
--- Diff:
core/src/main/scala/org/apache/spark/deploy/security/HadoopAccessManager.scala
---
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17966
Sorry I've been out traveling -- I'll try to update this by tonight
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17967#discussion_r117602233
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/RFormula.scala
---
@@ -38,29 +38,35 @@ import org.apache.spark.sql.types._
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/18039
[SPARK-20751][SQL] Add cot test in MathExpressionsSuite
## What changes were proposed in this pull request?
Add cot test in MathExpressionsSuite as
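The identity such a test would exercise can be sketched like this (a hypothetical helper for illustration, not the suite's actual code): `cot` is conventionally computed as `1 / tan(x)`, which should agree with `cos(x) / sin(x)`.

```scala
// Hypothetical cot, defined the conventional way.
def cot(x: Double): Double = 1.0 / math.tan(x)

// Check cot(x) == cos(x) / sin(x) at a few sample points, within tolerance.
val samples = Seq(0.25, 0.5, 1.0, 1.5)
val identityHolds = samples.forall { x =>
  math.abs(cot(x) - math.cos(x) / math.sin(x)) < 1e-12
}
```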
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18039
**[Test build #77113 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77113/testReport)**
for PR 18039 at commit
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17981
any more comment?
---
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/12646
Jenkins is about to shut down; we can retest this later
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17978
I'd hold this for another 3-4 days just in case..
---
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/12646
retest this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16697
**[Test build #77111 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77111/testReport)**
for PR 16697 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17967
**[Test build #77110 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77110/testReport)**
for PR 17967 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/12646
**[Test build #77112 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77112/testReport)**
for PR 12646 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17966
**[Test build #77114 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77114/testReport)**
for PR 17966 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17966
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17966
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/77114/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18023
**[Test build #77108 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77108/testReport)**
for PR 18023 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17967#discussion_r117602143
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/RFormula.scala
---
@@ -38,29 +38,35 @@ import org.apache.spark.sql.types._
Github user d0evi1 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18034#discussion_r117602669
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAModel.scala ---
@@ -468,7 +469,16 @@ object LocalLDAModel extends Loader[LocalLDAModel]
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/16648#discussion_r117602817
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -145,11 +145,85 @@ class CodegenContext {
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/17967
@yanboliang Thanks for the review and suggestion. Makes lots of sense. I
made a new commit to address these.
---
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r117600840
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -510,6 +510,69 @@ public UTF8String trim() {
}
}
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r117601121
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -510,6 +510,69 @@ public UTF8String trim() {
}
}
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r117601355
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -2015,4 +2015,121 @@ class SQLQuerySuite extends QueryTest
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17967
**[Test build #77110 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77110/testReport)**
for PR 17967 at commit
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r117601249
--- Diff:
common/unsafe/src/test/java/org/apache/spark/unsafe/types/UTF8StringSuite.java
---
@@ -730,4 +726,62 @@ public void testToLong() throws
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r117601293
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala
---
@@ -375,24 +374,61 @@ class
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r117600921
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -510,6 +510,69 @@ public UTF8String trim() {
}
}
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r117600707
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -510,6 +510,69 @@ public UTF8String trim() {
}
}
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17967
Merged build finished. Test PASSed.
---