Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r77009503
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/LogicalRelation.scala
---
@@ -33,7 +33,8 @@ import
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r77008695
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -255,3 +287,43 @@ case class AlterViewAsCommand(
Github user squito commented on the issue:
https://github.com/apache/spark/pull/14871
thanks for finding this and the quick fix @JoshRosen !
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r77007610
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -255,3 +287,43 @@ case class AlterViewAsCommand(
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/14864
@cloud-fan : I have taken care of that case in the PR (see L175 to L185).
The sort ordering will only be used when all the buckets have a single file. In
subsequent PRs I plan to extend this so
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/14892
ok, sounds reasonable.
---
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r77005603
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -0,0 +1,395 @@
+/*
+ * Licensed to the Apache Software
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14892
Looks like it's about 7 minutes 32 seconds out of 1 hour 49 minutes to
build/test YARN, as of say
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14864
I'm not sure it's safe to do so. A bucket may have more than one file (this
can happen if we append data to a bucketed table), so each file is
sorted, but the whole bucket is NOT,
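The point above can be illustrated without Spark at all. A minimal sketch with hypothetical per-file values (not Spark's actual file layout): two individually sorted files in one bucket, whose concatenation is not sorted.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class BucketSortDemo {
    static boolean isSorted(List<Integer> xs) {
        for (int i = 1; i < xs.size(); i++) {
            if (xs.get(i - 1) > xs.get(i)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Two files in the same bucket, each sorted on its own -- e.g. the
        // result of two separate appends to a bucketed table.
        List<Integer> file1 = Arrays.asList(1, 5, 9);
        List<Integer> file2 = Arrays.asList(2, 3, 8);

        // Concatenating the files does NOT yield a sorted bucket...
        List<Integer> bucket = new ArrayList<>(file1);
        bucket.addAll(file2);
        System.out.println(isSorted(bucket));   // false

        // ...so a planner can only rely on a bucket-level sort order after an
        // extra merge, or when each bucket has exactly one file.
        Collections.sort(bucket);
        System.out.println(isSorted(bucket));   // true
    }
}
```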
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/14892
Guess I missed mesos being moved to its own module; I just saw them changing
to build that way as well. So I guess never mind. I am curious how long it
takes, though?
If we see problems with it we
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14876
yea, pushing down partial aggregates below the exchange is a good idea, but I
think it's out of the scope of SPARK-12978, which aims to remove unnecessary
partial aggregates, right?
---
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/14876
yea, I think so. I like the approach in this PR.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14893
**[Test build #64720 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64720/consoleFull)**
for PR 14893 at commit
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/14892
Does this make a big time difference? Otherwise, lots of things in core
affect yarn. Mesos and standalone are always built since they are not separate
modules, so it seems like an easy sanity test to leave it
Github user wzhfy commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r77002553
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -383,21 +383,36 @@ private[spark] class HiveExternalCatalog(conf:
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14897
**[Test build #64719 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64719/consoleFull)**
for PR 14897 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14897
cc @yhuai @liancheng @clockfly
---
GitHub user cloud-fan opened a pull request:
https://github.com/apache/spark/pull/14897
[SPARK-17338][SQL] add global temp view
## What changes were proposed in this pull request?
global temporary view is a cross-session temporary view, which means it's
shared among all
Github user WeichenXu123 closed the pull request at:
https://github.com/apache/spark/pull/14628
---
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/14628
Because the KMeans algorithm is being optimized by another task, I am closing
this PR for now; when that one is merged, I'll check whether this still needs
to be optimized.
---
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14452#discussion_r76997680
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/subquery/CommonSubquery.scala
---
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14452
@viirya I am still trying to figure out the added value of this PR. Here is
my main question/concern: Spark pipelines operators in a single stage. Results
are only materialized at the end of a
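The pipelining behavior described above can be sketched with plain Java streams as an analogy (this is not Spark's engine): intermediate operations are lazy, and elements only flow through the whole pipeline when a terminal operation runs.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class PipelineDemo {
    // Returns {map calls before the terminal op, map calls after, result count}.
    static long[] run() {
        AtomicInteger mapCalls = new AtomicInteger();

        // Intermediate operations are lazy: building the pipeline runs nothing.
        Stream<Integer> pipeline = Stream.of(1, 2, 3, 4)
                .map(x -> { mapCalls.incrementAndGet(); return x * 2; })
                .filter(x -> x > 4);
        long before = mapCalls.get();   // still 0

        // Only the terminal operation pulls elements through every stage,
        // so intermediate results are never materialized on their own.
        long count = pipeline.count();
        return new long[]{before, mapCalls.get(), count};
    }

    public static void main(String[] args) {
        long[] r = run();
        System.out.println(r[0] + " " + r[1] + " " + r[2]);   // 0 4 2
    }
}
```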
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14452
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14452
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64718/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14452
**[Test build #64718 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64718/consoleFull)**
for PR 14452 at commit
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/14886
+1
---
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/14887#discussion_r76989743
--- Diff:
common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java
---
@@ -25,6 +25,8 @@
import
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/14887
Note that the only reason local-dirs was left in there was backwards
compatibility.
Make sure your recovery path is set in your YARN configuration.
---
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/14887
So we have had this come up multiple times now, and the shuffle
service really should be using the YARN recovery-specific disk. We made this
change under SPARK-14963. That disk is supposed to be
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/14568
---
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/14710#discussion_r76984890
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -532,39 +547,53 @@ class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14896
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64717/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14896
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14896
**[Test build #64717 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64717/consoleFull)**
for PR 14896 at commit
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/13802
yea, thanks!
---
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/13802
---
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14568
@tedyu could you close this one?
---
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/13802
@maropu could you close this one? It is not that relevant anymore. Thanks
for working on it though!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14895
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14895
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64716/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14895
**[Test build #64716 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64716/consoleFull)**
for PR 14895 at commit
Github user bookling commented on the issue:
https://github.com/apache/spark/pull/14880
In labelPropagation in the GraphX lib, each node is initialized with a unique
label, and at every step each node adopts the label that most of its
neighbors currently have, but ignores the label it
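A minimal, dependency-free sketch of one synchronous label-propagation step as described above (hypothetical graph; GraphX's actual implementation and tie-breaking differ):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class LabelPropagationStep {
    // One synchronous step: every node adopts the most frequent label among
    // its neighbors; the node's own current label is NOT counted, which is
    // the behavior the comment above points out. Ties are broken here by the
    // smallest label, purely to keep the demo deterministic.
    static Map<Integer, Integer> step(Map<Integer, List<Integer>> adj,
                                      Map<Integer, Integer> labels) {
        Map<Integer, Integer> next = new HashMap<>();
        for (Map.Entry<Integer, List<Integer>> e : adj.entrySet()) {
            Map<Integer, Integer> counts = new TreeMap<>();
            for (int nbr : e.getValue()) {
                counts.merge(labels.get(nbr), 1, Integer::sum);
            }
            int best = labels.get(e.getKey());
            int bestCount = 0;
            for (Map.Entry<Integer, Integer> c : counts.entrySet()) {
                if (c.getValue() > bestCount) {
                    best = c.getKey();
                    bestCount = c.getValue();
                }
            }
            next.put(e.getKey(), best);
        }
        return next;
    }

    public static void main(String[] args) {
        // A triangle 0-1-2 with a pendant node 3 attached to node 2.
        Map<Integer, List<Integer>> adj = new HashMap<>();
        adj.put(0, Arrays.asList(1, 2));
        adj.put(1, Arrays.asList(0, 2));
        adj.put(2, Arrays.asList(0, 1, 3));
        adj.put(3, Arrays.asList(2));
        // Every node starts with a unique label (its own id).
        Map<Integer, Integer> labels = new HashMap<>();
        for (int v : adj.keySet()) labels.put(v, v);
        System.out.println(step(adj, labels));
    }
}
```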
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14452
**[Test build #64718 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64718/consoleFull)**
for PR 14452 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14894
OK, but the change isn't actually related to SPARK-8368, so I think you can
remove that link in the title and the link that was created in the JIRA.
See `Utils.deleteRecursively`. You don't
Github user tone-zhang commented on the issue:
https://github.com/apache/spark/pull/14894
@srowen Thanks for the comments.
I wrote SPARK-8368 here just because the UT case name is "SPARK-8368:
includes jars passed in through --jars".
For the Utils method you mentioned, could
Github user chenghao-intel commented on the issue:
https://github.com/apache/spark/pull/14366
Ping @rxin, it seems the upstream is not updated.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14893
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64715/
Test FAILed.
---
Github user yucai commented on a diff in the pull request:
https://github.com/apache/spark/pull/10225#discussion_r76972643
--- Diff:
core/src/main/scala/org/apache/spark/storage/DiskBlockManager.scala ---
@@ -50,35 +50,98 @@ private[spark] class DiskBlockManager(conf: SparkConf,
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14893
Merged build finished. Test FAILed.
---
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/9571#discussion_r76972524
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -664,6 +707,116 @@ private[history] class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14893
**[Test build #64715 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64715/consoleFull)**
for PR 14893 at commit
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/14876
On the other hand, when caching the already-partitioned input table, we
cannot push them down;
```
(0 to 1000).map(x => (x % 2, x.toString)).toDF("a",
```
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14892
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14892
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64710/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14892
**[Test build #64710 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64710/consoleFull)**
for PR 14892 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14826
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64714/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14826
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14826
**[Test build #64714 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64714/consoleFull)**
for PR 14826 at commit
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/14876
I found that we need to push down partial aggregation below exchange
operators instead of merging them. For example, in the Spark v2.0 branch,
```
(0 to 1000).map(x => (x % 2,
```
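The plan shape under discussion, a partial aggregate below the exchange and a final aggregate above it, can be sketched without Spark. The data and helper below are hypothetical, illustrating why the partial step reduces the rows that cross the shuffle:

```java
import java.util.Map;
import java.util.TreeMap;

public class PartialAggDemo {
    // Partial ("map-side") aggregation: each partition pre-sums its own keys,
    // so at most one row per key per partition would be shuffled; the final
    // aggregate then merges one small map per partition. This is a sketch of
    // the plan shape, not Spark's implementation.
    static Map<Integer, Integer> aggregate(int[][][] partitions) {
        Map<Integer, Integer> total = new TreeMap<>();
        for (int[][] partition : partitions) {
            // Partial aggregate, computed before any data movement.
            Map<Integer, Integer> partial = new TreeMap<>();
            for (int[] row : partition) {
                partial.merge(row[0], row[1], Integer::sum);
            }
            // Final aggregate, merging the per-partition results.
            for (Map.Entry<Integer, Integer> e : partial.entrySet()) {
                total.merge(e.getKey(), e.getValue(), Integer::sum);
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // Hypothetical {key, value} rows spread over two "partitions".
        int[][][] partitions = {
            {{0, 1}, {1, 2}, {0, 3}},
            {{1, 4}, {0, 5}, {1, 6}},
        };
        System.out.println(aggregate(partitions));   // {0=9, 1=12}
    }
}
```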
Github user Stibbons commented on a diff in the pull request:
https://github.com/apache/spark/pull/14567#discussion_r76968414
--- Diff: python/pep8rc ---
@@ -0,0 +1,21 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user Stibbons commented on a diff in the pull request:
https://github.com/apache/spark/pull/14567#discussion_r76967915
--- Diff: python/.editorconfig ---
@@ -0,0 +1,30 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14885
**[Test build #3239 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3239/consoleFull)**
for PR 14885 at commit
Github user Stibbons commented on a diff in the pull request:
https://github.com/apache/spark/pull/14567#discussion_r76967736
--- Diff: dev/py-validate.sh ---
@@ -0,0 +1,110 @@
+#!/usr/bin/env bash
--- End diff --
My point of view:
- don't enforce it right
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14762
Ping @sumansomasundar
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r76967533
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -383,21 +383,36 @@ private[spark] class
Github user zhaoyunjiong commented on the issue:
https://github.com/apache/spark/pull/14887
@SaintBacchus Please check the logs below:
2016-08-30 10:16:24,982 INFO
org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService: Disk(s)
failed. 3/12 local-dirs turned bad:
Github user Stibbons commented on a diff in the pull request:
https://github.com/apache/spark/pull/14567#discussion_r76967333
--- Diff: dev/isort.cfg ---
@@ -1,9 +1,9 @@
# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14894
How would this relate to SPARK-8368?
We already have a Utils method for deleting recursively; please use that.
Is this the only path that leaves behind spark-warehouse?
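For readers following along, a recursive delete in the spirit of `Utils.deleteRecursively` might look like the sketch below, using `java.nio`; this is not Spark's actual implementation.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class DeleteRecursively {
    // Walk the tree and delete children before their parents by visiting
    // paths in reverse (deepest-first) order.
    static void deleteRecursively(Path root) throws IOException {
        if (!Files.exists(root)) return;
        try (Stream<Path> walk = Files.walk(root)) {
            walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        // A throwaway directory standing in for the leftover spark-warehouse.
        Path dir = Files.createTempDirectory("spark-warehouse-demo");
        Files.createDirectories(dir.resolve("sub"));
        Files.write(dir.resolve("sub").resolve("part-00000"), new byte[]{1, 2, 3});
        deleteRecursively(dir);
        System.out.println(Files.exists(dir));   // false
    }
}
```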
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r76967205
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/LogicalRelation.scala
---
@@ -33,7 +33,8 @@ import
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14896
**[Test build #64717 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64717/consoleFull)**
for PR 14896 at commit
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/14896
[SPARK-17332] [CORE] Make Java Loggers static members
## What changes were proposed in this pull request?
Make all Java Loggers static members
## How was this patch tested?
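The shape of the proposed change can be sketched as follows; `java.util.logging` stands in here for Spark's real loggers, and the reflection check is only for illustration:

```java
import java.lang.reflect.Modifier;
import java.util.Arrays;
import java.util.logging.Logger;

public class StaticLoggerDemo {
    // Before: every instance carries its own logger reference and pays a
    // registry lookup per construction.
    static class PerInstanceLogger {
        private final Logger log = Logger.getLogger(PerInstanceLogger.class.getName());
    }

    // After: one class-level reference, resolved once when the class is
    // initialized, and no per-instance state at all.
    static class StaticLogger {
        private static final Logger LOG = Logger.getLogger(StaticLogger.class.getName());
    }

    // Counts non-static (per-instance) fields declared by a class.
    static long instanceFieldCount(Class<?> c) {
        return Arrays.stream(c.getDeclaredFields())
                .filter(f -> !Modifier.isStatic(f.getModifiers()))
                .count();
    }

    public static void main(String[] args) {
        System.out.println(instanceFieldCount(PerInstanceLogger.class));  // 1
        System.out.println(instanceFieldCount(StaticLogger.class));       // 0
    }
}
```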
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14895
LGTM - pending Jenkins.
---
Github user chenghao-intel commented on the issue:
https://github.com/apache/spark/pull/12646
I like this PR since it's part of the SQL standard, but there is also another
JIRA, https://issues.apache.org/jira/browse/SPARK-17299; maybe we can fix that
in a follow-up PR. Can you
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r76966709
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -130,6 +130,7 @@ case class CatalogTable(
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/10896
@yhuai Thanks for your comment, and I agree with you. We'll keep the discussion going.
---
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14712#discussion_r76966560
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/LogicalRelation.scala
---
@@ -33,7 +33,8 @@ import
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14895
**[Test build #64716 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64716/consoleFull)**
for PR 14895 at commit
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r76966164
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2677,4 +2678,107 @@ class SQLQuerySuite extends QueryTest with
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/14895
[SPARK-17331] [CORE] [MLLIB] Avoid allocating 0-length arrays
## What changes were proposed in this pull request?
Avoid allocating some 0-length arrays, esp. in UTF8String, and by using
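The pattern behind this change can be sketched as follows; the `bytesOf` helper is hypothetical, not an API from the PR:

```java
import java.nio.charset.StandardCharsets;

public class EmptyArrayDemo {
    // A zero-length array has no elements to mutate, so a single shared
    // instance can safely stand in for every empty result.
    private static final byte[] EMPTY_BYTES = new byte[0];

    // Hypothetical helper in the spirit of the change: return the cached
    // instance instead of allocating a fresh byte[0] on every call.
    static byte[] bytesOf(String s) {
        return s.isEmpty() ? EMPTY_BYTES : s.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // new byte[0] allocates a distinct object on every evaluation...
        System.out.println(new byte[0] == new byte[0]);   // false
        // ...while the cached empty array is shared across calls.
        System.out.println(bytesOf("") == bytesOf(""));   // true
        System.out.println(bytesOf("ab").length);         // 2
    }
}
```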
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r76966028
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2677,4 +2678,107 @@ class SQLQuerySuite extends QueryTest with
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14712
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64713/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14712
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14712
**[Test build #64713 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64713/consoleFull)**
for PR 14712 at commit
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r76965552
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -476,6 +476,61 @@ public UTF8String trim() {
}
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r76965110
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -1789,6 +1803,133 @@ class SQLQuerySuite extends
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14894
Can one of the admins verify this patch?
---
GitHub user tone-zhang opened a pull request:
https://github.com/apache/spark/pull/14894
[SPARK-17330] [SPARK UT] Fix the failing Spark UT case (SPARK-8368)
## What changes were proposed in this pull request?
Check the database warehouse used in Spark UT, and remove the
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r76963822
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -431,56 +432,233 @@ case class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14893
**[Test build #64715 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64715/consoleFull)**
for PR 14893 at commit
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r76963598
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -431,56 +432,233 @@ case class
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r76963573
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -431,56 +432,233 @@ case class
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r76963406
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -431,56 +432,233 @@ case class
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14893
cc @yhuai
---
GitHub user cloud-fan opened a pull request:
https://github.com/apache/spark/pull/14893
[SPARK-17180][SPARK-17309][SPARK-17323][SQL][2.0] create AlterViewAsCommand
to handle ALTER VIEW AS
## What changes were proposed in this pull request?
Currently we use
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r76962244
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -501,6 +578,38 @@ public UTF8String trimRight() {
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r76962088
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -488,6 +543,28 @@ public UTF8String trimLeft() {
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r76961869
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -488,6 +543,28 @@ public UTF8String trimLeft() {
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14567#discussion_r76961681
--- Diff: python/pep8rc ---
@@ -0,0 +1,21 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14567#discussion_r76961521
--- Diff: python/.editorconfig ---
@@ -0,0 +1,30 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/12646#discussion_r76961503
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/types/UTF8String.java ---
@@ -488,6 +543,28 @@ public UTF8String trimLeft() {