Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/21939
got it. Thank you!
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/21939
@shaneknapp what was the version of pyarrow in that build? 0.8 or 0.10?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/21939
@BryanCutler So, for this upgrade, even though the JVM-side dependency is 0.10,
pyspark can work with any pyarrow version from 0.8 to 0.10 without problem
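The comment above asks about a client-side pyarrow version range. A minimal sketch of how such a range check could be expressed is below; the helper names are illustrative and the bounds (0.8 inclusive, below 0.11) are taken from the discussion, not from Spark's actual code.

```python
# Hypothetical helper: test whether an installed pyarrow version string falls
# in the range discussed above (0.8.x through 0.10.x). Expects three-part
# version strings like "0.10.0".

def _version_tuple(version):
    """Turn '0.10.0' into (0, 10, 0) for tuple comparison."""
    return tuple(int(part) for part in version.split(".")[:3])

def pyarrow_version_supported(version, minimum="0.8.0", below="0.11.0"):
    """Return True if `version` is within [minimum, below)."""
    v = _version_tuple(version)
    return _version_tuple(minimum) <= v < _version_tuple(below)

if __name__ == "__main__":
    for v in ("0.8.0", "0.9.0", "0.10.0", "0.11.0", "0.7.1"):
        print(v, pyarrow_version_supported(v))
```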
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/22003
@dongjoon-hyun no problem. Thank you!
---
Repository: spark
Updated Branches:
refs/heads/master 51e2b38d9 -> 278984d5a
[SPARK-25019][BUILD] Fix orc dependency to use the same exclusion rules
## What changes were proposed in this pull request?
During upgrading Apache ORC to 1.5.2
([SPARK-24576](https://issues.apache.org/jira/browse/S
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/22003
lgtm. Merging to master.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/22003#discussion_r207986831
--- Diff: sql/core/pom.xml ---
@@ -90,39 +90,11 @@
org.apache.orc
orc-core
${orc.classifier
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/22003#discussion_r207962501
--- Diff: sql/core/pom.xml ---
@@ -90,39 +90,11 @@
org.apache.orc
orc-core
${orc.classifier
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/22003#discussion_r207888608
--- Diff: sql/core/pom.xml ---
@@ -90,39 +90,11 @@
org.apache.orc
orc-core
${orc.classifier
Repository: spark
Updated Branches:
refs/heads/master d4a277f0c -> fc21f192a
[SPARK-24895] Remove spotbugs plugin
## What changes were proposed in this pull request?
Spotbugs maven plugin was a recently added plugin before 2.4.0 snapshot
artifacts were broken. To ensure it does not affect t
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/21865
lgtm. I am merging this PR to master branch. Then, I will kick off
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/job/spark-master-maven-snapshots
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/21865
cc @HyukjinKwon @kiszk
I will merge this PR once it passes the test.
---
Author: yhuai
Date: Wed Mar 7 17:53:32 2018
New Revision: 25568
Log:
Update KEYS for Sameer Agarwal
Modified:
release/spark/KEYS
Modified: release/spark/KEYS
==
--- release/spark/KEYS (original)
+++ release/spark
Author: yhuai
Date: Wed Feb 28 07:25:53 2018
New Revision: 25324
Log:
Releasing Apache Spark 2.3.0
Added:
release/spark/spark-2.3.0/
- copied from r25323, dev/spark/v2.3.0-rc5-bin/
Removed:
dev/spark/v2.3.0-rc5-bin
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/20473#discussion_r16362
--- Diff: python/run-tests.py ---
@@ -151,6 +151,38 @@ def parse_opts():
return opts
+def _check_dependencies(python_exec
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19872#discussion_r165449847
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala
---
@@ -199,7 +200,7 @@ object ExtractFiltersAndInnerJoins
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/20473#discussion_r165445947
--- Diff: python/run-tests.py ---
@@ -151,6 +151,38 @@ def parse_opts():
return opts
+def _check_dependencies(python_exec
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/20473#discussion_r165445232
--- Diff: python/run-tests.py ---
@@ -151,6 +151,38 @@ def parse_opts():
return opts
+def _check_dependencies(python_exec
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/20465
So, jenkins jobs run those tests with python3? If so, I feel better because
those tests are not completely skipped in Jenkins. If it is hard to make them
run with python 2, let's have a log to
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/20465
@felixcheung jenkins is actually skipping those tests (see the failure of
this pr). It makes sense to provide a way to allow developers to not run those
tests. But, I'd prefer that we run those
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19872#discussion_r165253818
--- Diff: python/pyspark/sql/tests.py ---
@@ -4353,6 +4347,446 @@ def test_unsupported_types(self):
df.groupby('id').apply(
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19872#discussion_r165253514
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala
---
@@ -199,7 +200,7 @@ object ExtractFiltersAndInnerJoins
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19872#discussion_r165220142
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala
---
@@ -199,7 +200,7 @@ object ExtractFiltersAndInnerJoins
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/20037#discussion_r163463718
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -1271,7 +1271,7 @@ private[spark] object SparkSubmitUtils
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/20110
Thank you! Let's also check the build result to make sure
`pyspark.streaming.tests.FlumePollingStreamTests` is indeed triggered (I hit
this issue while running this
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19535#discussion_r159019845
--- Diff: python/pyspark/streaming/flume.py ---
@@ -54,8 +54,13 @@ def createStream(ssc, hostname, port,
:param bodyDecoder: A function used to
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19535#discussion_r159013024
--- Diff: python/pyspark/streaming/flume.py ---
@@ -54,8 +54,13 @@ def createStream(ssc, hostname, port,
:param bodyDecoder: A function used to
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/5604#discussion_r157933488
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala
---
@@ -0,0 +1,340 @@
+/*
+ * Licensed to the
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/19448
Thank you :)
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/19448
I am not really worried about this particular change. It's already merged
and it seems like a small and safe change. I am not planning to revert it.
But, in general, let's avoid mergi
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/19448
@HyukjinKwon branch-2.2 is a maintenance branch, so I am not sure it is
appropriate to merge this change to branch-2.2 since it is not really a bug
fix. If the doc is not accurate, we should fix the
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/19149
Can we add a test?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19080#discussion_r136214689
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala
---
@@ -30,18 +30,43 @@ import
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/19080
Have a question after reading the new approach. Let's say that we have a
join like `T1 JOIN T2 on T1.a = T2.a`. Also `T1` is hash partitioned by the
value of `T1.a` and it has 10 partitions, an
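The co-partitioning situation in the question above can be sketched in a few lines: two tables hash partitioned on the join key `a` are only shuffle-free joinable when both sides use the same partitioner and the same partition count. The partitioner and data below are made up for illustration and are not Spark's implementation.

```python
# Toy model of hash partitioning: assign each row to hash(key) % num_partitions.

def hash_partition(rows, key, num_partitions):
    """Distribute rows (dicts) into num_partitions buckets by the key column."""
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        partitions[hash(row[key]) % num_partitions].append(row)
    return partitions

t1 = [{"a": i, "x": i * 10} for i in range(20)]
t2 = [{"a": i, "y": i * 100} for i in range(20)]

p1 = hash_partition(t1, "a", 10)
p2 = hash_partition(t2, "a", 10)

# With equal partition counts, partition i of T1 only needs partition i of T2:
for i in range(10):
    keys1 = {r["a"] for r in p1[i]}
    keys2 = {r["a"] for r in p2[i]}
    assert keys1 == keys2  # co-partitioned: matching keys are co-located

# With differing counts (e.g. 10 vs 5), this co-location property breaks and
# one side must be reshuffled before the join.
```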
Add the news about spark-summit-eu-2017 agenda
Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/35eb1471
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/35eb1471
Diff: http://git-wip-us.apache.or
Repository: spark-website
Updated Branches:
refs/heads/asf-site cca972e7f -> 35eb14717
http://git-wip-us.apache.org/repos/asf/spark-website/blob/35eb1471/site/releases/spark-release-1-3-0.html
--
diff --git a/site/releases/spar
http://git-wip-us.apache.org/repos/asf/spark-website/blob/35eb1471/site/news/spark-accepted-into-apache-incubator.html
--
diff --git a/site/news/spark-accepted-into-apache-incubator.html
b/site/news/spark-accepted-into-apache-incu
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18944
lgtm
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the
Repository: spark
Updated Branches:
refs/heads/branch-2.2 76ee41fd7 -> a585c870a
[SPARK-2][TEST][2.2] Fix the test failure of describe.sql
## What changes were proposed in this pull request?
Test failed in `describe.sql`.
We need to fix the related bug introduced in
(https://github.com/a
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18316
Thanks! I have merged this pr to branch-2.2.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18316
thanks! merging to branch-2.2
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18316
lgtm
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18064
My suggestion was about getting the changes to the interfaces of
ExecutedCommandExec and SaveIntoDataSourceCommand into separate prs. It will
help code review (both speed and quality).
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18148
@vanzin Seems merging to branch-2.2 was an accident? Since it is not really
a bug fix, should we revert it from branch-2.2 and just keep it in the master?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18064
I just came across this pr. I have one piece of general feedback. It will be
great if we can make a pr have a single purpose. This pr contains different
kinds of changes in order to fix the UI. If refactoring
Repository: spark
Updated Branches:
refs/heads/branch-2.2 6c628e75e -> b560c975b
Revert "[SPARK-20946][SQL] simplify the config setting logic in
SparkSession.getOrCreate"
This reverts commit e11d90bf8deb553fd41b8837e3856c11486c2503.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Repository: spark
Updated Branches:
refs/heads/master 2a780ac7f -> 0eb1fc6cd
Revert "[SPARK-20946][SQL] simplify the config setting logic in
SparkSession.getOrCreate"
This reverts commit e11d90bf8deb553fd41b8837e3856c11486c2503.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Com
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/18172
Reverting this because it breaks repl tests.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/17617#discussion_r119938185
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -143,14 +144,29 @@ class SparkHadoopUtil extends Logging
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17763
lgtm
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17666
I have reverted this change from both master and branch-2.2. I have
reopened the jira.
---
Repository: spark
Updated Branches:
refs/heads/branch-2.2 9e8d23b3a -> d191b962d
Revert "[SPARK-20311][SQL] Support aliases for table value functions"
This reverts commit 714811d0b5bcb5d47c39782ff74f898d276ecc59.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-w
Repository: spark
Updated Branches:
refs/heads/master ac1ab6b9d -> f79aa285c
Revert "[SPARK-20311][SQL] Support aliases for table value functions"
This reverts commit 714811d0b5bcb5d47c39782ff74f898d276ecc59.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-u
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17666
I am going to revert this PR from master and branch-2.2.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17666
@maropu Sorry. I think this PR introduces a regression.
```
scala> spark.sql("select * from range(1, 10) cross join range(1,
10)").explain
==
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17905
i see. I think
https://github.com/apache/spark/pull/17905/commits/d4c1a9db25ee7386f7b12e4dabb54210a9892510
is good. How about we get it checked in first (after jenkins passes)?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17905
lgtm
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17905
@falaki's PR did not actually trigger that test.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17905
@felixcheung you are right. That is the problem.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17903
I do not think https://github.com/apache/spark/pull/17649 caused the
problem. I saw failures without that internally.
---
Repository: spark
Updated Branches:
refs/heads/branch-2.2 23681e9ca -> 4179ffc03
[SPARK-20661][SPARKR][TEST] SparkR tableNames() test fails
## What changes were proposed in this pull request?
Cleaning existing temp tables before running tableNames tests
## How was this patch tested?
SparkR Un
Repository: spark
Updated Branches:
refs/heads/master 829cd7b8b -> 2abfee18b
[SPARK-20661][SPARKR][TEST] SparkR tableNames() test fails
## What changes were proposed in this pull request?
Cleaning existing temp tables before running tableNames tests
## How was this patch tested?
SparkR Unit t
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17903
Thanks @falaki. Merging to master and branch-2.2.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17903
Seems 2.2 build is fine. But, I'd like to get this merged in branch-2.2
since this test will fail if any previous tests leak tables.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17903
@felixcheung fyi. I think the main problem of this test is that it will be
broken if tests executed before this one leak any table. I think this change
makes sense. I will merge it once it passes
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17892
@felixcheung Seems master build is broken because R tests are broken
(https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-sbt-hadoop-2.7/2844/console).
I am not sure
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17746
@dbtsai Thanks for the explanation and the context :)
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17746
Can I ask how we decided merging this dependency change after the cut of
the release branch (especially this change affects user code)?
---
Repository: spark
Updated Branches:
refs/heads/branch-2.2 32c5a105e -> e929cd767
[SPARK-20358][CORE] Executors failing stage on interrupted exception thrown by
cancelled tasks
## What changes were proposed in this pull request?
This was a regression introduced by my earlier PR here:
https:/
Repository: spark
Updated Branches:
refs/heads/master c5a31d160 -> b2ebadfd5
[SPARK-20358][CORE] Executors failing stage on interrupted exception thrown by
cancelled tasks
## What changes were proposed in this pull request?
This was a regression introduced by my earlier PR here:
https://git
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17659
lgtm. Merging to master and branch-2.2.
---
Repository: spark
Updated Branches:
refs/heads/master 4000f128b -> 5142e5d4e
[SPARK-20217][CORE] Executor should not fail stage if killed task throws
non-interrupted exception
## What changes were proposed in this pull request?
If tasks throw non-interrupted exceptions on kill (e.g.
java.ni
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17531
Thanks. Merging to master.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17531
test this please
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17423
got it. Thanks :)
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17423
@felixcheung `SparkContext.getOrCreate` is the preferred way to create a
SparkContext. So, even if we have the check, it is still better to use `getOrCreate`.
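The pattern the comment recommends can be sketched generically: a classmethod that returns the active instance when one exists instead of constructing a second one. The toy `Context` class below stands in for SparkContext; pyspark itself is not used here.

```python
# Generic sketch of the get-or-create singleton pattern behind
# SparkContext.getOrCreate. The class and attribute names are illustrative.

class Context:
    _active = None  # module-wide "active context", like the active SparkContext

    def __init__(self, app_name="default"):
        self.app_name = app_name

    @classmethod
    def getOrCreate(cls, app_name="default"):
        """Return the active context, creating it only on the first call."""
        if cls._active is None:
            cls._active = cls(app_name)
        return cls._active

c1 = Context.getOrCreate("my-app")
c2 = Context.getOrCreate("other-app")  # reuses the existing instance
assert c1 is c2
assert c2.app_name == "my-app"  # the first caller's settings win
```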
---
Repository: spark
Updated Branches:
refs/heads/master fcb68e0f5 -> dd9049e04
[SPARK-19620][SQL] Fix incorrect exchange coordinator id in the physical plan
## What changes were proposed in this pull request?
When adaptive execution is enabled, an exchange coordinator is used in the
Exchange op
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16952
LGTM. Merging to master.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17156
merged to branch-2.1
---
Repository: spark
Updated Branches:
refs/heads/branch-2.1 da04d45c2 -> 664c9795c
[SPARK-19816][SQL][TESTS] Fix an issue that DataFrameCallbackSuite doesn't
recover the log level
## What changes were proposed in this pull request?
"DataFrameCallbackSuite.execute callback functions when a Data
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17156
Let's also merge this to branch-2.1.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16917
Let's use a meaningful title in the future :)
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16935
cool. It has been merged.
---
lso log the start of a test. So, if a test is hanging, we
can tell which test file is running.
## How was this patch tested?
This is a change for python tests.
Author: Yin Huai
Closes #16935 from yhuai/SPARK-19604.
(cherry picked from commit f6c3bba22501ee7753d85c6e51ffe851d43869c1)
Signed-off
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16935
Seems I cannot merge now... Will try again later.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16935
ok. Nothing new to add. I will merge this to master and branch-2.1 (in case
we want to debug any python test hanging issue in branch-2.1).
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16935
Let's not merge it right now. I may need to log more.
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/16935
[SPARK-19604] [TESTS] Log the start of every Python test
## What changes were proposed in this pull request?
Right now, we only have info level log after we finish the tests of a
Python
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16894
thanks!
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16067
@gatorsmile can we also add it in branch-2.0? Thanks!
---
location of those downloaded jars when
`spark.sql.hive.metastore.jars` is set to `maven`.
## How was this patch tested?
jenkins
Author: Yin Huai
Closes #16649 from yhuai/SPARK-19295.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spar
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16649
Cool I am merging this to master.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16645
My main concern about this pr is that people will think it is recommended
to add new batches to force those rules to run in a certain order. For
these resolution rules, we can also use conditions
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/16649
[SPARK-19295] [SQL] IsolatedClientLoader's downloadVersion should log the
location of downloaded metastore client jars
## What changes were proposed in this pull request?
This will hel
uai
Closes #16628 from yhuai/known_translations.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/0c923185
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/0c923185
Diff: http://git-wip-us.apache.org/repos/asf/spark/d
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16613
nvm. On second thought, the feature flag does not really buy us
anything. We just store the original view definition and the column mapping in
the metastore. So, I think it is fine to just do the
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16628
I am merging this to master.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14204
ok I agree. Originally, I thought it would be helpful for figuring out the
worker that an executor belongs to.
But, if it does not provide very useful information, I am fine with dropping it.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16628
done
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16613
is there a feature flag that is used to determine if we use this new
approach? I feel it will be good to have an internal feature flag to determine
the code path. So, if there is something wrong that
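The internal flag the comment asks for can be sketched as a config switch that picks between the legacy and the new code path, so the new behavior can be turned off if a problem shows up after release. The flag name and both code paths below are invented for illustration; they are not the actual Spark config.

```python
# Sketch of an internal feature flag gating a new code path, with the legacy
# path kept as a fallback. Flag name and behaviors are hypothetical.

DEFAULT_CONF = {"spark.sql.view.newResolution.enabled": True}

def resolve_view(definition, conf=DEFAULT_CONF):
    """Pick the code path based on the (hypothetical) feature flag."""
    if conf.get("spark.sql.view.newResolution.enabled", False):
        return "new-path:" + definition      # new behavior, on by default
    return "legacy-path:" + definition       # old behavior, kept as an escape hatch

assert resolve_view("v1") == "new-path:v1"
assert resolve_view("v1", {"spark.sql.view.newResolution.enabled": False}) == "legacy-path:v1"
```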