Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16397
retest this please
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16397
Build started: [TESTS] `ALL`
[![PR-16397](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=A54F518D-4D20-424F-95B6-3641C55CFBC1&svg=true)](https://ci.appveyor
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/16405
[SPARK-19002][BUILD] Check pep8 against merge_spark_pr.py script
## What changes were proposed in this pull request?
This PR proposes to check pep8 against the `merge_spark_pr.py` script.
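As an illustration of the kind of rule pep8 enforces, here is a minimal line-length check in plain Python. This is illustrative only; the real `dev/lint-python` script delegates to the pep8 tool with its complete rule set, and the function name below is made up:

```python
# Minimal sketch of one pep8-style rule: flag lines longer than the
# 100-character limit Spark's style guide uses. The real check runs the
# pep8 tool, which covers many more rules than this.
def overlong_lines(source, limit=100):
    return [i + 1 for i, line in enumerate(source.splitlines())
            if len(line) > limit]

sample = "short line\n" + "x" * 120
assert overlong_lines(sample) == [2]
```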
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16405
Hi @srowen and @holdenk, this is a small PR to run pep8 against
`merge_spark_pr.py`. Could I ask if it makes sense please?
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16405
Hm, this passed on my local machine.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16405
Hi @srowen and @holden, this is a small PR to check pep8 against
`./dev/merge_spark_pr.py`. Could you check if it makes sense, please?
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16405#discussion_r93886392
--- Diff: dev/lint-python ---
@@ -23,6 +23,7 @@ PATHS_TO_CHECK="./python/pyspark/
./examples/src/main/python/ ./dev/sparktestsup
# TODO: fix
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16405
Ah, this seems to be complaining in Python 3.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16397
Here, I concatenated all the logs into a single file -
https://gist.github.com/HyukjinKwon/58567451773f87322c7009007e4fdc34
I just found each one in the PR description.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16397
cc @srowen, could I please ask you to review this one?
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16405
retest this please
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16413
Hi @Kevy123, it seems this pull request was mistakenly opened. Could you
please close it?
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16405
retest this please
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16405
Sure, let me double check.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16386
Regarding only the comment
https://github.com/apache/spark/pull/16386#issuecomment-269386229, I have a
similar (rather, combined) idea: that we provide another option such as a corrupt
file name
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16405
It seems some existing examples such as `random_rdd_generation.py` do not
work with Python 3.3.6 either, although they compile fine, so the pep8 check
passes. I fixed only the errors from
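A hedged illustration of how an example can pass a style check yet behave differently across Python versions; division semantics are one common case, though the actual failures in these examples may have been different:

```python
# `/` on integers is true division in Python 3 (it truncated in
# Python 2), so code relying on truncation must use `//` explicitly.
# A style checker accepts both forms, so pep8 alone cannot catch this.
assert 7 / 2 == 3.5    # Python 3 true division
assert 7 // 2 == 3     # floor division, same result on both versions
```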
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16405
BTW, has anyone tried Python 3.6.0 with PySpark? Apparently I could not even
run `./bin/pyspark`; it failed with an error.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16397#discussion_r94024218
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/MultiDatabaseSuite.scala ---
@@ -80,7 +80,7 @@ class MultiDatabaseSuite extends QueryTest
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16397#discussion_r94024588
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveCommandSuite.scala
---
@@ -257,31 +257,37 @@ class HiveCommandSuite extends
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16405
Ah, thank you for approving @srowen.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16397#discussion_r94030166
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/MultiDatabaseSuite.scala ---
@@ -80,7 +80,7 @@ class MultiDatabaseSuite extends QueryTest
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16397
Build started: [TESTS] `ALL`
[![PR-16397](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=443B17ED-C621-4A3A-B45A-1F5E042189A2&svg=true)](https://ci.appveyor
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16397
Build started: [TESTS] `ALL`
[![PR-16397](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=F9490ECC-9D49-44C8-8CDE-7BCA9C1FD88C&svg=true)](https://ci.appveyor
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16397
retest this please
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/16429
[WIP][SPARK-19019][PYTHON] Fix hijacked collections.namedtuple to be
serialized with keyword-only arguments
## What changes were proposed in this pull request?
Currently, PySpark does
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16429
cc @davies and @JoshRosen. I know both of you are insightful in this area.
I am not too sure whether this is a correct fix, as it seems not to be fixed
even in some other third-party Python libraries. Do
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16397
I just checked each is fine in a concatenated log file -
https://gist.github.com/HyukjinKwon/8851815ede9dcae80632a5378b74d1ae
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/16433
[SPARK-19022][TESTS] Fix tests dependent on OS due to different newline
characters
## What changes were proposed in this pull request?
There are two tests failing on Windows due to
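The platform difference behind these failures can be sketched as follows; this is a generic illustration, not the actual fix in the PR:

```python
import os

# On Windows os.linesep is "\r\n", so text produced with the platform
# separator fails a comparison against a literal "\n"-joined
# expectation. Normalizing before comparing sidesteps the difference.
def normalize_newlines(text):
    return text.replace("\r\n", "\n")

expected = "line1\nline2"
actual = "line1" + os.linesep + "line2"  # platform-dependent join
assert normalize_newlines(actual) == expected
```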
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16433
Build started: [TESTS]
`org.apache.spark.sql.streaming.StreamingQueryStatusAndProgressSuite`
[![PR-16433](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=AE40452F
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16428
Do you mind if I ask whether it writes the line separator correctly in the
encoding specified in the option?
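To illustrate why the question matters (a standalone encoding example, not Spark's code path): the byte representation of the line separator depends on the output encoding, so a writer that hard-codes a single 0x0A byte would corrupt multi-byte-encoded output.

```python
# "\n" is one byte in UTF-8 but two bytes in UTF-16-LE; a writer that
# emits a raw 0x0A between UTF-16 rows would produce a malformed file.
row_sep_utf8 = "\n".encode("utf-8")
row_sep_utf16 = "\n".encode("utf-16-le")
assert row_sep_utf8 == b"\x0a"
assert row_sep_utf16 == b"\x0a\x00"
```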
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16433#discussion_r94200602
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQueryStatusAndProgressSuite.scala
---
@@ -30,10 +30,16 @@ import
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16433
In most cases, it seems they explicitly write `\n` (e.g. when writing CSV and
JSON). _Apparently_, these are the only tests failing due to this problem
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16397
@srowen, thank you Sean. I think it is okay for now. To be honest, I found
some more of the same instances, but I haven't fixed, tested and verified them
yet. Maybe I need one more go to deal
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16405
I just manually ran `./dev/create-release/translate-contributors.py` which
had a conflict for sure.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16433
Build started: [TESTS]
`org.apache.spark.sql.streaming.StreamingQueryStatusAndProgressSuite`
[![PR-16433](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=D1A3B54F
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16429#discussion_r94201674
--- Diff: python/pyspark/serializers.py ---
@@ -382,18 +382,30 @@ def _hijack_namedtuple():
return
global _old_namedtuple
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16405
retest this please
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16405
retest this please
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16428
BTW, the reason I asked that in
https://github.com/apache/spark/pull/16428#issuecomment-269635303 is that I
remember checking the reading/writing paths related to encodings before, and the
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16397
It was a problem because I could not proceed further: the error messages
were flooding the output, and the logs were somehow truncated in AppVeyor (e.g.
https://ci.appveyor.com/project/spark-test
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16433
Yes, I hesitated to submit this PR for a while due to similar concerns.
> is it because this is the only test for prettyJson?
I believe so. Let me double check ag
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16397
Thank you @srowen !!
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16428
Ah, I meant to add a test there in this PR.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16428#discussion_r94238157
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala
---
@@ -71,7 +71,9 @@ private[csv] class CSVOptions
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16428#discussion_r94239452
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -573,6 +573,7 @@ final class DataFrameWriter[T] private[sql](ds
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16405#discussion_r94263510
--- Diff:
examples/src/main/python/mllib/decision_tree_regression_example.py ---
@@ -44,7 +44,7 @@
# Evaluate model on test instances and
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16405#discussion_r94263914
--- Diff: dev/lint-python ---
@@ -19,10 +19,8 @@
SCRIPT_DIR="$( cd "$( dirname "$0" )" && pwd )"
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16405#discussion_r94273247
--- Diff: dev/lint-python ---
@@ -19,10 +19,8 @@
SCRIPT_DIR="$( cd "$( dirname "$0" )" && pwd )"
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16405#discussion_r94273331
--- Diff: dev/lint-python ---
@@ -19,10 +19,8 @@
SCRIPT_DIR="$( cd "$( dirname "$0" )" && pwd )"
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16428#discussion_r94273423
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -659,7 +659,7 @@ def text(self, path, compression=None):
self._jwrite.text(path
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16428#discussion_r94273531
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -33,6 +33,7 @@ import
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16428#discussion_r94273548
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -573,6 +573,7 @@ final class DataFrameWriter[T] private[sql](ds
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16428#discussion_r94273678
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -905,4 +906,21 @@ class CSVSuite extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16428#discussion_r94273677
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -677,6 +677,8 @@ def csv(self, path, mode=None, compression=None,
sep=None, quote=None, escape=No
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16428#discussion_r94273685
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -905,4 +906,21 @@ class CSVSuite extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16428#discussion_r94273737
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -573,6 +573,7 @@ final class DataFrameWriter[T] private[sql](ds
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16428#discussion_r94273866
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -573,6 +573,7 @@ final class DataFrameWriter[T] private[sql](ds
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16433
I just double checked. It seems `org.json4s.pretty` is used in several
places, but they look like they are for debugging, printing, and building
a request body (e.g
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15848
@skanjila I think we can close this if there are no updates for now.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16397
@srowen, otherwise, I think I could open a `[WIP]` or `[DO-NOT-MERGE]`
PR and then repeatedly push and test commits fixing these, rather than
trying to verify these only via
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16397
Yes, I think I am almost there and am fixing these, although these are
slightly more than I expected, due to some errors I didn't think were
caused by this issue su
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16397
BTW, thanks again for your quick response.
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/16451
[WIP][SPARK-18922][SQL][CORE][TESTS] Fix all identified tests failed due to
path and resources problems on Windows
## What changes were proposed in this pull request?
WIP - just
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
Build started: [TESTS] `ALL`
[![PR-16451](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=044D6A78-26AA-4A2C-A4A1-B39DF60C811C&svg=true)](https://ci.appveyor
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
retest this please
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16429
Gentle ping.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16320#discussion_r94358447
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchema.scala
---
@@ -85,7 +85,9 @@ private[csv] object
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16429
Thanks for your interest, @azmras. I just checked it as below:
```python
sc.parallelize(range(100), 8)
```
```
Traceback (most recent call last):
File
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16429
Hi @joshrosen and @davies, do you think that should be ported in this PR? I
am worried about making this PR harder to review by porting it here.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16429
Hi @azmras, now it should work fine for your case as well.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16429
@azmras Could you maybe double check? It works okay on my local machine, as below:
```
Welcome to
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
Build started: [TESTS]
`org.apache.spark.streaming.kafka.ReliableKafkaStreamSuite`
[![PR-16451](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=A2836427-A94C-4BE0
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
Build started: [TESTS]
`org.apache.spark.streaming.kafka.DirectKafkaStreamSuite`
[![PR-16451](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=1C2B248D-2455-4ADB-AC8A
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
retest this please
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
Build started: [TESTS]
`org.apache.spark.streaming.kafka010.DirectKafkaStreamSuite`
[![PR-16451](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=887C39EC-849A-40E5
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
retest this please
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
Build started: [TESTS]
`org.apache.spark.streaming.kafka010.DirectKafkaStreamSuite`
[![PR-16451](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=E8488472-738C-4ADF
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16429
@azmras Thank you for confirming this.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
Build started: [TESTS]
`org.apache.spark.streaming.kafka010.DirectKafkaStreamSuite`
[![PR-16451](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=A7615F8B-58B0-4D9B
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
Now, there are 30-ish tests failing on Windows which I could identify via the
AppVeyor tests
[here](https://gist.github.com/HyukjinKwon/88a0b37cd027934bc14f3aa9f812be31),
which I am currently working
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
retest this please
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
Build started: [TESTS]
`org.apache.spark.sql.hive.PartitionedTablePerfStatsSuite`
[![PR-16451](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=0C0F228B-9B67-49AC-9C35
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16451#discussion_r94562021
--- Diff: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala ---
@@ -482,7 +482,7 @@ class UtilsSuite extends SparkFunSuite with
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16451#discussion_r94561930
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1485,17 +1485,18 @@ private[spark] object Utils extends Logging
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16451#discussion_r94564561
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/receiver/ReceiverSupervisorImpl.scala
---
@@ -175,6 +175,12 @@ private[streaming] class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16451#discussion_r94562979
--- Diff:
external/kafka-0-10/src/test/scala/org/apache/spark/streaming/kafka010/DirectKafkaStreamSuite.scala
---
@@ -372,7 +367,7 @@ class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16451#discussion_r94562260
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala
---
@@ -138,10 +139,15 @@ class KafkaTestUtils
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16451#discussion_r94563476
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -222,25 +223,34 @@ case class LoadDataCommand(
val
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16451#discussion_r94564162
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveSparkSubmitSuite.scala ---
@@ -339,10 +339,15 @@ class HiveSparkSubmitSuite
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16451#discussion_r94562756
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala
---
@@ -374,8 +380,15 @@ class KafkaTestUtils
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16451#discussion_r94563417
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -222,25 +223,34 @@ case class LoadDataCommand(
val
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
Hi @srowen, do you mind checking whether the changes look
reasonable? (I will double check that the tests really pass after the runs
above finish.)
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
retest this please
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16451#discussion_r94569924
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala
---
@@ -138,10 +139,15 @@ class KafkaTestUtils
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16451#discussion_r94575201
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala
---
@@ -138,10 +139,15 @@ class KafkaTestUtils
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16451#discussion_r94575266
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala
---
@@ -374,8 +380,15 @@ class KafkaTestUtils
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16451#discussion_r94575575
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveSparkSubmitSuite.scala ---
@@ -339,10 +339,15 @@ class HiveSparkSubmitSuite
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16451#discussion_r94575683
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala
---
@@ -138,10 +139,15 @@ class KafkaTestUtils
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16451
Let me push a small commit fixing the _"not"_ condition, mainly to
retrigger the tests.