GitHub user stanzhai opened a pull request:
https://github.com/apache/spark/pull/16874
[SPARK-19509][SQL][branch-2.1] Fix an NPE in grouping sets when using
an empty column
## What changes were proposed in this pull request?
If a column of a table is all null values
Github user stanzhai closed the pull request at:
https://github.com/apache/spark/pull/16874
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
GitHub user stanzhai opened a pull request:
https://github.com/apache/spark/pull/16953
[SPARK-19622][WebUI] Fix an HTTP error in a paged table when using a `Go`
button to search.
## What changes were proposed in this pull request?
The search function of the paged table is not
GitHub user stanzhai opened a pull request:
https://github.com/apache/spark/pull/17529
[SPARK-20211][SQL] floor or ceil on a decimal whose `precision < scale`
should be supported
## What changes were proposed in this pull request?
`precision` in a decimal indicates
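The truncated description above concerns decimals whose precision is smaller than their scale, e.g. 0.0001, whose unscaled value 1 has precision 1 but scale 4. A minimal sketch of the expected floor/ceil semantics, using plain `java.math.BigDecimal` rather than Spark's `Decimal` (values are illustrative):

```scala
import java.math.{BigDecimal => JBigDecimal}
import java.math.RoundingMode

// 0.0001 has unscaled value 1, so precision = 1 while scale = 4.
val d = new JBigDecimal("0.0001")
assert(d.precision == 1 && d.scale == 4)

// floor and ceil should still work on such a value:
val floored = d.setScale(0, RoundingMode.FLOOR).longValueExact   // 0
val ceiled  = d.setScale(0, RoundingMode.CEILING).longValueExact // 1
```

The point of the fix is that `precision < scale` is a legal shape for a decimal and should not be rejected by floor/ceil.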
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/17529
cc @chenghao-intel
GitHub user stanzhai opened a pull request:
https://github.com/apache/spark/pull/17099
Constant alias columns in INNER JOIN should not be folded by
FoldablePropagation rule
## What changes were proposed in this pull request?
This PR fixes the code in the Optimizer phase where the
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/17099
@hvanhovell
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/17099
Thanks for @gatorsmile's help.
`ConstantFolding` will affect other test cases in
`FoldablePropagationSuite`.
It's fine without adding `ConstantFolding`.
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/17099
ok
Github user stanzhai commented on a diff in the pull request:
https://github.com/apache/spark/pull/17099#discussion_r103848391
--- Diff: sql/core/src/test/resources/sql-tests/results/inner-join.sql.out
---
@@ -0,0 +1,68 @@
+-- Automatically generated by SQLQueryTestSuite
GitHub user stanzhai opened a pull request:
https://github.com/apache/spark/pull/17131
[SPARK-19766][SQL][BRANCH-2.0] Constant alias columns in INNER JOIN should
not be folded by FoldablePropagation rule
This PR is a fix for branch-2.0.
Refer to #17099
@gatorsmile
You
Github user stanzhai closed the pull request at:
https://github.com/apache/spark/pull/17131
GitHub user stanzhai opened a pull request:
https://github.com/apache/spark/pull/16617
[SPARK-19261][SQL]Support `ALTER TABLE table_name ADD COLUMNS(..)`
statement
## What changes were proposed in this pull request?
We should support `ALTER TABLE table_name ADD COLUMNS
Github user stanzhai closed the pull request at:
https://github.com/apache/spark/pull/16617
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/16617
Good job!
I will review your PR.
Github user stanzhai closed the pull request at:
https://github.com/apache/spark/pull/19301
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
GitHub user stanzhai opened a pull request:
https://github.com/apache/spark/pull/22051
[SPARK-25064][WEBUI] Add killed tasks count info to WebUI
## What changes were proposed in this pull request?
Add missing killed tasks to WebUI.
Total tasks = Active + Failed
GitHub user stanzhai opened a pull request:
https://github.com/apache/spark/pull/18544
[SPARK-21318][SQL]Improve exception message thrown by `lookupFunction`
## What changes were proposed in this pull request?
The function actually exists in the currently selected database, and
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/18544
cc @liancheng
Github user stanzhai commented on a diff in the pull request:
https://github.com/apache/spark/pull/18544#discussion_r202607295
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionCatalog.scala ---
@@ -129,14 +129,14 @@ private[sql] class HiveSessionCatalog
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/18544
cc @gatorsmile changes in
`sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala`
has been reverted
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/18544
It's not reasonable: `failFunctionLookup` throws `NoSuchFunctionException`.
The function actually exists in the currently selected database, so we should throw
the exception which is due
GitHub user stanzhai opened a pull request:
https://github.com/apache/spark/pull/21663
[SPARK-24680][Deploy]Support spark.executorEnv.JAVA_HOME in Standalone mode
## What changes were proposed in this pull request?
spark.executorEnv.JAVA_HOME does not take effect when a
Github user stanzhai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21623#discussion_r199062132
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -378,6 +378,14 @@ object SQLConf {
.booleanConf
GitHub user stanzhai opened a pull request:
https://github.com/apache/spark/pull/21680
[SPARK-24704][WebUI] Fix the order of stages in the DAG graph
## What changes were proposed in this pull request?
Before:
![wx20180630-155537](https://user
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/21663
@jerryshao My Spark Application is built on top of JDK10, but the
standalone cluster manager is running with JDK8 which does not support JDK10.
Java 7 support has been removed since Spark
Github user stanzhai commented on a diff in the pull request:
https://github.com/apache/spark/pull/18544#discussion_r201579348
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionCatalog.scala ---
@@ -129,14 +129,14 @@ private[sql] class HiveSessionCatalog
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/18544
cc @gatorsmile Addressed. Review this please. Thanks!
GitHub user stanzhai opened a pull request:
https://github.com/apache/spark/pull/7368
[SPARK-9010][Documentation]Improve the Spark Configuration document about
`spark.kryoserializer.buffer`
The meaning of spark.kryoserializer.buffer should be "Initial size of
Kryo's ser
Github user stanzhai closed the pull request at:
https://github.com/apache/spark/pull/7368
GitHub user stanzhai opened a pull request:
https://github.com/apache/spark/pull/7393
[SPARK-9010][Documentation]Improve the Spark Configuration document about
`spark.kryoserializer.buffer`
The meaning of spark.kryoserializer.buffer should be "Initial size of
Kryo's ser
Github user stanzhai closed the pull request at:
https://github.com/apache/spark/pull/7393
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/18544
The issue has been addressed a long time ago @cloud-fan @maropu
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/18544
@cloud-fan
Users' Hive UDFs are registered in the externalCatalog and do not exist in the
functionRegistry.
It will throw a NoSuchFunctionException when an exception is encountered
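The lookup order under discussion can be sketched as follows. This is an illustrative model only, not Spark's actual `lookupFunction` implementation; every name below is hypothetical:

```scala
// Simplified model: registered/builtin functions vs. Hive UDFs kept in the
// external catalog of the currently selected database.
case class NoSuchFunctionException(name: String) extends RuntimeException(name)

def lookupFunction(name: String,
                   functionRegistry: Set[String],
                   externalCatalog: Set[String]): String = {
  if (functionRegistry.contains(name)) {
    s"registered:$name"
  } else if (externalCatalog.contains(name)) {
    // The function exists in the current database; if loading it later fails,
    // the error should say "failed to load", not "no such function".
    s"hiveUDF:$name"
  } else {
    throw NoSuchFunctionException(name)
  }
}

assert(lookupFunction("upper", Set("upper"), Set("myUdf")) == "registered:upper")
assert(lookupFunction("myUdf", Set("upper"), Set("myUdf")) == "hiveUDF:myUdf")
```

The PR's complaint is precisely that the second branch should not be reported with the same exception as the third.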
Github user stanzhai commented on a diff in the pull request:
https://github.com/apache/spark/pull/18544#discussion_r219468948
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalogSuite.scala
---
@@ -1440,6 +1441,8 @@ abstract class
Github user stanzhai commented on a diff in the pull request:
https://github.com/apache/spark/pull/18544#discussion_r219485843
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/UDFSuite.scala
---
@@ -193,4 +193,29 @@ class UDFSuite
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/18544
Hi @gatorsmile, I've added some test cases, and they passed on my machine.
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/18544
fixed @gatorsmile . retest this please
GitHub user stanzhai opened a pull request:
https://github.com/apache/spark/pull/19301
[SPARK-22084][SQL] Fix performance regression in aggregation strategy
## What changes were proposed in this pull request?
This PR fixes a performance regression in the aggregation strategy
Github user stanzhai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19301#discussion_r140155475
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/view.scala
---
@@ -38,7 +38,7 @@ import
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/19301
https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala#L211
```scala
val aggregateExpressions
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/19301
@cenyuhai This is an optimization of the physical plan, and your case can be
optimized.
```SQL
select dt,
geohash_of_latlng,
sum(mt_cnt),
sum(ele_cnt),
round(sum(mt_cnt
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/19301
@viirya The problem is already obvious: the same aggregate expression
will be computed multiple times. I will provide a benchmark result later
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/19301
@viirya
Benchmark code:
```scala
val N = 500L << 22
val benchmark = new Benchmark("agg", N)
val expressions = (0 until 50).map(i =>
Github user stanzhai commented on a diff in the pull request:
https://github.com/apache/spark/pull/19301#discussion_r140699522
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/interfaces.scala
---
@@ -72,11 +74,19 @@ object
GitHub user stanzhai opened a pull request:
https://github.com/apache/spark/pull/18986
[SPARK-21774][SQL] The rule PromoteStrings should cast a string to double
type when compared with an int
## What changes were proposed in this pull request?
The rule PromoteStrings should
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/18986
In MySQL, conversion of values from a string type to numeric compares them
as floating-point (real) numbers.
See https://dev.mysql.com/doc/refman/5.7/en/type-conversion.html
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/18986
@DonnyZone @gatorsmile @cloud-fan PostgreSQL will throw an error when
comparing a string to an int.
```
postgres=# select * from tb;
a | b
--+---
0.1 | 1
a
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/18986
@gatorsmile @DonnyZone When comparing a string to an int in Hive, it will
cast the string type to double.
```
hive> select * from tb;
0 0
0.1 0
true
```
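The cross-engine behavior above (MySQL and Hive compare as doubles, PostgreSQL errors out) can be sketched in plain Scala. This illustrates the proposed promotion only; it is not Spark's `PromoteStrings` rule itself, and the function name is hypothetical:

```scala
// Cast the string side to double before comparing, as Hive/MySQL do; a
// non-numeric string yields None (analogous to a NULL comparison result).
def compareStringWithInt(s: String, i: Int): Option[Int] =
  scala.util.Try(s.toDouble).toOption.map(d => java.lang.Double.compare(d, i.toDouble))

assert(compareStringWithInt("0.1", 1).exists(_ < 0))  // 0.1 < 1
assert(compareStringWithInt("1.0", 1).contains(0))    // 1.0 == 1
assert(compareStringWithInt("a", 1).isEmpty)          // non-numeric string
```

PostgreSQL's choice would instead correspond to failing the query outright when the string is not a valid number.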
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/17529
cc @gatorsmile
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/18544
@gatorsmile
Some test cases have been added.
Thanks for reviewing.
Github user stanzhai commented on a diff in the pull request:
https://github.com/apache/spark/pull/18244#discussion_r121053165
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala ---
@@ -126,7 +126,15 @@ final class Decimal extends Ordered[Decimal] with
Github user stanzhai commented on a diff in the pull request:
https://github.com/apache/spark/pull/18244#discussion_r121058323
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala ---
@@ -126,7 +126,15 @@ final class Decimal extends Ordered[Decimal] with
Github user stanzhai commented on a diff in the pull request:
https://github.com/apache/spark/pull/18244#discussion_r121060627
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala ---
@@ -126,7 +126,15 @@ final class Decimal extends Ordered[Decimal] with
Github user stanzhai closed the pull request at:
https://github.com/apache/spark/pull/17529
Github user stanzhai commented on the issue:
https://github.com/apache/spark/pull/10991
We've just upgraded our Spark cluster from 1.6.x to 2.x, and I found that the
REST APIs from the Spark Master UI are unavailable.
It's important for us to use the REST APIs to m