Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19993
@hhbyyh thanks for the review. I see that for some classes there are
ongoing PRs, so I cannot change them now to move them into a common place
with a common test. Should I wait for those PRs to be
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19993#discussion_r157414340
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Bucketizer.scala
---
@@ -140,10 +140,10 @@ final class Bucketizer @Since("1.4.0"
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19811#discussion_r157415970
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/regexpExpressions.scala
---
@@ -112,15 +112,14 @@ case class Like(left
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19811#discussion_r157518488
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/regexpExpressions.scala
---
@@ -112,15 +112,14 @@ case class Like(left
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19811
LGTM too, thanks @kiszk and @bdrillard! This is a very important PR IMHO
---
-
To unsubscribe, e-mail: reviews-unsubscr
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157691450
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -99,6 +99,17 @@ object TypeCoercion
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157749463
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -99,6 +99,17 @@ object TypeCoercion
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157749627
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -99,6 +99,17 @@ object TypeCoercion
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157787266
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -99,6 +99,17 @@ object TypeCoercion
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157794266
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -158,11 +169,6 @@ object TypeCoercion
GitHub user mgaido91 opened a pull request:
https://github.com/apache/spark/pull/20023
[SPARK-22036][SQL] Decimal multiplication with high precision/scale often
returns NULL
## What changes were proposed in this pull request?
When there is an operation between Decimals and
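The description is cut off in this archive, but for context: for a multiplication, Spark derives the result type as precision = p1 + p2 + 1 and scale = s1 + s2, and once the precision exceeds the 38-digit maximum the pre-PR behavior could only represent the value as NULL. A minimal sketch of a precision/scale adjustment in the spirit of this PR — the function name is invented and the minimum adjusted scale of 6 is an assumption, not necessarily the PR's exact code:

```python
MAX_PRECISION = 38          # Spark's DecimalType maximum
MIN_ADJUSTED_SCALE = 6      # assumed minimum scale kept after adjustment

def multiply_result_type(p1, s1, p2, s2):
    """Result (precision, scale) for Decimal(p1,s1) * Decimal(p2,s2).

    Applies the SQL-standard-style rule precision = p1 + p2 + 1,
    scale = s1 + s2, then caps precision at MAX_PRECISION by
    sacrificing fractional digits (down to MIN_ADJUSTED_SCALE)
    instead of producing an unrepresentable type.
    """
    precision, scale = p1 + p2 + 1, s1 + s2
    if precision <= MAX_PRECISION:
        return precision, scale
    # Prefer keeping integral digits; shrink the scale, but not below
    # the minimum the adjustment is willing to keep.
    int_digits = precision - scale
    adjusted_scale = max(MAX_PRECISION - int_digits,
                         min(scale, MIN_ADJUSTED_SCALE))
    return MAX_PRECISION, adjusted_scale
```

For example, under these rules Decimal(38,2) * Decimal(10,2) gets the representable type (38,4) rather than the naive, out-of-range (49,4).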
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20016#discussion_r157809630
--- Diff:
examples/src/main/scala/org/apache/spark/examples/DFSReadWriteTest.scala ---
@@ -49,12 +49,10 @@ object DFSReadWriteTest
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20016#discussion_r157810171
--- Diff: examples/src/main/scala/org/apache/spark/examples/LocalALS.scala
---
@@ -95,7 +95,7 @@ object LocalALS {
def showWarning
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20021#discussion_r157812984
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -930,6 +930,16 @@ class CodegenContext
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19340
hi @Kevin-Ferret, thanks for looking at this. Yes, you are right, I have
not changed the method for updating the centroids. The current method seems to
me the most widely adopted for cosine
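The comment is truncated, but the most widely adopted centroid update for cosine distance is usually the spherical k-means one: average the cluster's points, then re-normalize the mean to unit length. A sketch of that reading — an illustration, not necessarily the PR's exact method:

```python
import math

def cosine_centroid(points):
    # Spherical k-means style update: arithmetic mean of the cluster's
    # points, re-normalized to unit length so that cosine similarity to
    # the centroid is well defined.
    dim = len(points[0])
    mean = [sum(p[i] for p in points) / len(points) for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in mean))
    return [x / norm for x in mean]
```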
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20021#discussion_r157820471
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -930,6 +930,18 @@ class CodegenContext
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20021#discussion_r157822361
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -930,6 +930,18 @@ class CodegenContext
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19993
@hhbyyh I created a common infrastructure. Please let me know if I have to
modify it. Meanwhile, I'd like to discuss where to put the common UTs: do you
have any specific idea about the
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19993
@hhbyyh thanks for the comments. I have already addressed all of them, but I
am waiting to push until we settle on the UT. Honestly, I think `checkParam` is not the
best place: checking the exception requires
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20021#discussion_r157962547
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -930,6 +930,18 @@ class CodegenContext
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r157967667
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -158,11 +169,6 @@ object TypeCoercion
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20023
@cloud-fan @dongjoon-hyun @gatorsmile @rxin @viirya I saw you worked on
these files. Maybe you can help review the PR. For further details about the
reasons for this PR, please refer to the e
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20023
@cloud-fan yes, Hive changed and, most importantly, at the moment we are not
compliant with the SQL standard. So currently Spark returns results that are
different from Hive's and not compliant with
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20021
Honestly, I very much preferred performing the check only in tests rather than
throwing an exception in production. IMHO it is overkill to throw an
exception in production, and in the remote case that we
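The "check only in tests, not in production" preference can be illustrated outside Spark: in Python, an `assert` gives exactly this trade-off, since assertions fire during normal and test runs but are compiled away under `python -O`. The function and its validity check below are invented for illustration only:

```python
def register_mutable_state(states, java_type, name):
    """Append a codegen-style mutable state entry, validating the name.

    The validity check is an `assert`: it catches malformed names during
    testing, while a production deployment run with `python -O` skips
    the check entirely and never throws.
    """
    assert name.isidentifier(), f"invalid mutable state name: {name!r}"
    states.append((java_type, name))
    return name

states = []
register_mutable_state(states, "int", "count")   # passes the check
```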
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19929
kindly ping @cloud-fan @gatorsmile @HyukjinKwon @zero323
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19993
thanks @hhbyyh, I updated the PR according to your suggestion and previous
comments.
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20023
thanks for looking at this @hvanhovell. The reasons why I didn't introduce
a configuration variable for this behavior are:
1. As far as I know, currently there is no way to read rel
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20023
@hvanhovell, as far as 1 is regarded, I was referring to [this
comment](https://github.com/apache/spark/pull/19449#pullrequestreview-67789784)
and [this PR](https://github.com/apache/spark/pull
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20023#discussion_r158157137
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -1526,15 +1526,15 @@ class SQLQuerySuite extends QueryTest with
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19993#discussion_r158158375
--- Diff: mllib/src/test/scala/org/apache/spark/ml/param/ParamsSuite.scala
---
@@ -430,4 +433,49 @@ object ParamsSuite extends SparkFunSuite
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20023
Thank you for your review @dongjoon-hyun. I think what we can do is add
more tests to the whitelist in `HiveCompatibilitySuite`, updating them according
to HIVE-15331. Were you thinking of this, or
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20023#discussion_r158224551
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
@@ -136,10 +137,54 @@ object DecimalType extends AbstractDataType
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20023#discussion_r158225279
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/DecimalPrecision.scala
---
@@ -243,17 +248,43 @@ object DecimalPrecision
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20023#discussion_r158225412
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
@@ -136,10 +137,54 @@ object DecimalType extends AbstractDataType
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20023#discussion_r158225505
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
@@ -136,10 +137,54 @@ object DecimalType extends AbstractDataType
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20023#discussion_r158225632
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
@@ -136,10 +137,54 @@ object DecimalType extends AbstractDataType
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20023#discussion_r158225832
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
@@ -136,10 +137,54 @@ object DecimalType extends AbstractDataType
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20023#discussion_r158226546
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
@@ -136,10 +137,54 @@ object DecimalType extends AbstractDataType
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20023
@gatorsmile, please refer to the [e-mail to the dev mail
list](https://mail-archives.apache.org/mod_mbox/spark-dev/201712.mbox/%3CCAEorWNAJ4TxJR9NBcgSFMD_VxTg8qVxusjP%2BAJP-x%2BJV9zH-yA
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20021
LGTM too, thanks!
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19940
Jenkins, retest this please
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20023#discussion_r158242030
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
@@ -136,10 +137,54 @@ object DecimalType extends AbstractDataType
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19940
the test errors are unrelated to this change. Any other comments @cloud-fan
@kiszk @viirya ?
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19940
Jenkins, retest this please
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19940#discussion_r158336014
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1193,7 +1196,8 @@ case class
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20043#discussion_r158361160
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/nullExpressions.scala
---
@@ -321,7 +322,12 @@ case class IsNull(child
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20043#discussion_r158361417
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -56,7 +56,36 @@ import
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19993
thanks @hhbyyh, I removed the 2 too strict checks.
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20023
@gatorsmile I answered your comments about DB2 in the e-mail.
@cloud-fan that would help, but it would not solve the problem: it would
just make the problem show up with bigger numbers
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19929
@gatorsmile I added the test, but I didn't get what needs to be updated in
`registerPython`. Could you please explain it to me?
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19929#discussion_r158539355
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/python/PythonUDF.scala
---
@@ -29,9 +29,12 @@ case class PythonUDF(
func
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19929
thank you @cloud-fan, changed.
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19929
Jenkins, retest this please
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19929#discussion_r158579661
--- Diff: python/pyspark/sql/tests.py ---
@@ -434,6 +434,16 @@ def test_udf_with_array_type(self):
self.assertEqual(list(range(3)), l1
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19929#discussion_r158579754
--- Diff: python/pyspark/sql/functions.py ---
@@ -2075,9 +2075,10 @@ class PandasUDFType(object):
def udf(f=None, returnType=StringType
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20023
@gatorsmile that is scenario 3. I will explain why, and afterwards I will
post an erratum to the summary in my last e-mail, because I made a
mistake about how DB2 computes the result
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20023
Thanks for your analysis @gatorsmile. Actually, the rule you specified for
Oracle is what it uses when casting, rather than when doing arithmetic
operations. Yes, DB2 has rather different rules to
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20023
Thanks @gatorsmile. Then should I create a follow-up PR for #20008 to
cover cases 2 and 3 before going on with this PR, or can we proceed with
this PR and the test cases added in this PR
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19929
@gatorsmile, yes, the reason the seed doesn't work lies in the way Python
UDFs are executed, i.e. a new Python process is created for each partition to
evaluate the Python UDF. Thus the seed i
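The per-partition behavior described above can be illustrated without Spark at all: if every worker re-seeds its RNG with the same value, each partition produces an identical "random" stream. A hypothetical sketch, with plain function calls standing in for Spark's per-partition Python workers:

```python
import random

def udf_with_fixed_seed(partition):
    # In Spark, each partition is evaluated by its own Python worker
    # process; seeding inside the UDF (or at import time) therefore
    # restarts the same pseudo-random stream in every partition.
    rng = random.Random(42)
    return [rng.random() for _ in partition]

partitions = [[0, 1, 2], [3, 4, 5]]
outputs = [udf_with_fixed_seed(p) for p in partitions]
# Both partitions yield the identical sequence: the seed makes every
# partition's output equal rather than making the whole job reproducible.
```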
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19929
@gatorsmile done, thanks!
GitHub user mgaido91 opened a pull request:
https://github.com/apache/spark/pull/20084
[SPARK-22904][SQL] Add tests for decimal operations and string casts
## What changes were proposed in this pull request?
Test coverage for arithmetic operations leading to:
1
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20023
@gatorsmile created #20084 for adding tests. Thanks.
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19993
@MLnick @viirya @WeichenXu123 any further comments on this?
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20084
cc @gatorsmile
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r158843869
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -158,11 +213,8 @@ object TypeCoercion
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r158844162
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercionSuite.scala
---
@@ -389,6 +389,25 @@ class TypeCoercionSuite
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r158844322
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercionSuite.scala
---
@@ -389,6 +389,25 @@ class TypeCoercionSuite
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r158844437
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercionSuite.scala
---
@@ -389,6 +389,25 @@ class TypeCoercionSuite
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r158844477
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercionSuite.scala
---
@@ -469,12 +488,21 @@ class TypeCoercionSuite
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r158844501
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercionSuite.scala
---
@@ -469,12 +488,21 @@ class TypeCoercionSuite
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r158844539
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercionSuite.scala
---
@@ -469,12 +488,21 @@ class TypeCoercionSuite
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20010
@bdrillard yes, after addressing my last comments I think we can build
this. My only concern is that we are not covering the case in which there are
nested complex structures in the wider case
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20029
What seems never to be closed, according to your analysis, is the client used to
interact with the metastore. This might be a problem we are not aware of
in normal SQL applications, since we have only
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20023
@cloud-fan thanks for your feedback. Honestly, I don't have a strong opinion
on how we should behave in that case. That's why I wanted some feedback from
the community on that point, whi
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20029
> The hiveClient created for the resourceLoader is only used to addJar,
which is, in turn, to add Jar to the shared IsolatedClientLoader. Then we can
just use the shared hive client for t
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20010
@gczsjdy @bdrillard the test errors are valid. In some cases an exception
is expected to be thrown, but it isn't, due to the fix. So they should be
updated accordingly.
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19993
@jkbradley maybe you can also help reviewing this, thanks.
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19492
kindly ping @viirya @HyukjinKwon
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r159119696
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -148,6 +160,61 @@ object TypeCoercion
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20023
thanks @gatorsmile, do you have any suggestion for the conf name?
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19224
yes, next week I will rebase it on top of current master. Thanks @gatorsmile
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20174#discussion_r160114031
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
---
@@ -102,10 +102,12 @@ case class HashAggregateExec
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20010#discussion_r160115072
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercionSuite.scala
---
@@ -462,27 +507,139 @@ class TypeCoercionSuite
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19993#discussion_r160143590
--- Diff: mllib/src/main/scala/org/apache/spark/ml/param/params.scala ---
@@ -249,6 +250,27 @@ object ParamValidators {
def arrayLengthGt[T
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20023#discussion_r160147366
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
@@ -117,6 +117,7 @@ object DecimalType extends AbstractDataType
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19492
any more comments on this @viirya @HyukjinKwon?
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19224
@gatorsmile I think now it is ready for review, thanks.
GitHub user mgaido91 opened a pull request:
https://github.com/apache/spark/pull/20189
[SPARK-22975] MetricsReporter should not throw exception when there was no
progress reported
## What changes were proposed in this pull request?
`MetricsReporter` assumes that there has
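The description is truncated, but the gist of SPARK-22975 is that the reporter's gauges read fields of the query's last progress, which is null before the first batch completes. A guard of the kind the PR proposes can be sketched as follows — an illustrative Python analogue, not Spark's actual Scala code; the names here are hypothetical:

```python
class MetricsReporterSketch:
    """Every gauge falls back to a default value when no progress has
    been reported yet, instead of dereferencing a missing progress."""

    def __init__(self, stream):
        # `stream` is any object exposing `last_progress`, which may be
        # None before the first streaming batch completes.
        self.stream = stream

    def _gauge(self, extract, default=0.0):
        progress = self.stream.last_progress
        return default if progress is None else extract(progress)

    def input_rows_per_second(self):
        return self._gauge(lambda p: p["inputRowsPerSecond"])
```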
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20023#discussion_r160361785
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
@@ -117,6 +117,7 @@ object DecimalType extends AbstractDataType
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20023#discussion_r160394589
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/DecimalType.scala ---
@@ -117,6 +117,7 @@ object DecimalType extends AbstractDataType
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20174
@liancheng @liufengdb if the problem affects only the `dropDuplicates`
method, can't we just check there if there are no columns and avoid doing
anything at all in that
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20189
@tdas @zsxwing may I kindly ask if you might take a look at this? Thanks!
GitHub user mgaido91 opened a pull request:
https://github.com/apache/spark/pull/20219
[SPARK-23025][SQL] Support Null type in scala reflection
## What changes were proposed in this pull request?
Add support for `Null` type in the `schemaFor` method for Scala reflection
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20219#discussion_r160687957
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala
---
@@ -1441,6 +1441,13 @@ class DatasetSuite extends QueryTest with
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20174
I see, thanks for your answer @liancheng
Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20174#discussion_r160772523
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceOperatorSuite.scala
---
@@ -198,6 +198,20 @@ class
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20226
LGTM too
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20189
@zsxwing the test you suggested fails on Jenkins while it passes locally. I
am not sure about the reason, since I cannot reproduce it. I had the same test
error in my previous trials. Previously the
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/20023
any more comments @gatorsmile @cloud-fan @dongjoon-hyun @viirya @hvanhovell
?
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/19224
kindly ping @gatorsmile