Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/16030
Or the thing that we should fix here is to throw an exception if a partition
column is also found as part of the dataSchema.
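The suggested check could be sketched like this (a pure-Python illustration with hypothetical names; the actual fix would live in Spark's Scala `DataSource` code):

```python
def validate_partition_columns(data_schema, partition_columns):
    """Reject a configuration where a partition column also appears in
    the data schema. `data_schema` is a list of (name, type) pairs and
    `partition_columns` a list of names; both are illustrative stand-ins
    for Spark's StructType and inferred partitioning."""
    data_names = {name for name, _ in data_schema}
    overlap = data_names.intersection(partition_columns)
    if overlap:
        raise ValueError(
            f"Partition column(s) {sorted(overlap)} also appear in the "
            "data schema; remove them from one of the two")

# No overlap: passes silently.
validate_partition_columns([("id", "long")], ["date"])
```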
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/16030
@maropu I wouldn't say this is a regression; I would say that this working in
2.0.2 was a bug. If you want the column `a` to be interpreted as a
`LongType` instead of `IntegerType`
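The mismatch being described can be illustrated with a toy inference function (pure Python with a made-up name, not Spark's actual rules): the value in a path fragment like `a=1` is only the string "1", so without a user-supplied schema the narrowest type that fits is picked, yielding an `IntegerType` rather than the `LongType` the user wanted.

```python
def infer_partition_type(raw_value):
    """Toy partition-value type inference: pick the narrowest type
    that can represent the raw string from the directory name."""
    try:
        n = int(raw_value)
    except ValueError:
        return "StringType"
    # Fits in 32 bits -> IntegerType, otherwise LongType.
    return "IntegerType" if -2**31 <= n < 2**31 else "LongType"
```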
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/16030#discussion_r89845728
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetPartitionDiscoverySuite.scala
---
@@ -969,4 +969,15 @@ class
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15896
@hvanhovell Done! Thanks for the quick review!
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15896
Hey @andrewor14! I went with your way. @hvanhovell, can you take a quick
look please? I would really like this to be available in Spark 2.1 (even though
it is a new API)
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15951#discussion_r89030586
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -84,30 +84,95 @@ case class DataSource(
private
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15951#discussion_r89030610
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -84,30 +84,95 @@ case class DataSource(
private
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15951#discussion_r89030487
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -84,30 +84,95 @@ case class DataSource(
private
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15951#discussion_r89030464
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -84,30 +84,95 @@ case class DataSource(
private
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15951
Thanks @tejasapatil for the review. Addressed your comments
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15951
@ericl I feel that would probably break 90% of production Spark jobs out
there, so I'm a bit scared of something that radical. I agree, it's confusing
and annoying
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15949
Left several comments that are all related to each other. @zsxwing I would
like your feedback on those as well, so that @tcondie doesn't have to make too
many changes flipping timestamp precision
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15949#discussion_r89025767
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -344,8 +370,11 @@ class StreamExecution
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15949#discussion_r89025593
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -72,6 +72,26 @@ case class CurrentTimestamp
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15949#discussion_r89025088
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -72,6 +72,26 @@ case class CurrentTimestamp
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15921
Hallelujah! @zsxwing shall we merge this?
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15949#discussion_r89017675
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -38,6 +40,26 @@ import
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15949#discussion_r89017544
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -422,6 +451,8 @@ class StreamExecution(
val
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15949#discussion_r89017366
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -72,6 +72,28 @@ case class CurrentTimestamp
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15921
Thanks @gatorsmile and @tdas. I addressed your comments. The semantics look
a lot cleaner now. That still doesn't mean it's clean, though :P
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15942
Closing this in favor of #15951
Github user brkyvz closed the pull request at:
https://github.com/apache/spark/pull/15942
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15951#discussion_r89012272
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -291,22 +360,24 @@ case class DataSource
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15951#discussion_r89007519
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -84,30 +84,96 @@ case class DataSource(
private
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15951#discussion_r88994472
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -84,30 +84,90 @@ case class DataSource(
private
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15949#discussion_r88941686
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -422,6 +432,7 @@ class StreamExecution(
val
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15951#discussion_r88934282
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -272,14 +309,20 @@ case class DataSource
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15951#discussion_r88934128
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -573,4 +573,39 @@ class DataFrameReaderWriterSuite
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15951
True. But there's no reason "part" and "id" can't be strings, right?
On Nov 21, 2016 12:16 AM, "Xiao Li" wrote:
> The real issue is that a
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15951
cc @rxin @marmbrus Don't know who's the best person to look at this, but
git blame says I mainly changed your code :)
GitHub user brkyvz opened a pull request:
https://github.com/apache/spark/pull/15951
[SPARK-18510] Fix data corruption from inferred partition column dataTypes
## What changes were proposed in this pull request?
### The Issue
If I specify my schema when doing
GitHub user brkyvz opened a pull request:
https://github.com/apache/spark/pull/15942
[SPARK-18407] Inferred partition columns cause assertion error in
StructuredStreaming
## What changes were proposed in this pull request?
It turns out we are a bit enthusiastic when
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15730
Hi @WeichenXu123, thank you for this PR. Sorry for taking so long to get
back to you. Your optimization would be very helpful. I have a couple of
thoughts, though. Your examples always take into account
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15896
On hold on my side. Will try to get back to it
On Nov 17, 2016 3:31 PM, "Dongjoon Hyun" wrote:
> Hi, @brkyvz <https://github.com/brkyvz> and @gatorsmile
&g
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15921
cc @davies for PySpark changes
cc @liancheng for `checkpoint` API and javadoc update
cc @marmbrus for `withWatermark` API. Question here: should we throw an
analysis exception if the Dataset
GitHub user brkyvz opened a pull request:
https://github.com/apache/spark/pull/15921
Add missing Python APIs: withWatermark and checkpoint to DataFrame
## What changes were proposed in this pull request?
This PR adds two of the newly added methods of `Dataset`s to Python
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15896
@gatorsmile Talking offline with several people, I may put this PR on hold
for now since it is a behavior change. I guess it would be better to go with
Options 1 or 2 that I defined in the PR
GitHub user brkyvz opened a pull request:
https://github.com/apache/spark/pull/15909
[SPARK-18475] Be able to increase parallelism in StructuredStreaming Kafka
source
## What changes were proposed in this pull request?
This PR adds the configuration `numPartitions` to the
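The kind of parallelism increase being proposed can be sketched as splitting each Kafka partition's offset range into several contiguous slices (a pure-Python illustration of the idea, not the PR's actual code):

```python
def split_offset_range(start, end, num_parts):
    """Divide one Kafka partition's [start, end) offset range into
    num_parts contiguous slices so more tasks can read in parallel.
    Earlier slices absorb the remainder so sizes differ by at most 1."""
    total = end - start
    base, extra = divmod(total, num_parts)
    ranges, cursor = [], start
    for i in range(num_parts):
        size = base + (1 if i < extra else 0)
        ranges.append((cursor, cursor + size))
        cursor += size
    return ranges
```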
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15896
cc @gatorsmile I changed a test you added. Do you have any strong feelings
on this?
GitHub user brkyvz opened a pull request:
https://github.com/apache/spark/pull/15896
[SPARK-18465] Uncache table shouldn't throw an exception when table doesn't
exist
## What changes were proposed in this pull request?
While this behavior is debatable, co
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15801
@tdas Addressed your comments. Test time increased to 2.5 seconds though,
fyi.
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15801
@tdas Added test for the data as well
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15815
LGTM
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15815
@anabranch I don't see how the documentation was wrong. The second argument
doesn't take the seed as a parameter, therefore the seed is random
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15804
@tdas Addressed
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15806
I think the change you should make is here:
https://github.com/apache/spark/pull/15806/files#diff-7dc261474784c6402f7020ffe7f61038R212
where you always append `file:///`; otherwise, it
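The suggestion can be sketched in pure Python (illustrative only; Spark does this with Hadoop's `Path`/`FileSystem` APIs rather than `urllib`):

```python
from urllib.parse import urlparse

def with_default_file_scheme(path):
    """Prefix bare local paths with the file scheme so downstream code
    always sees a fully qualified URI; leave qualified URIs untouched."""
    if urlparse(path).scheme:
        return path
    # Absolute paths start with '/', so this yields file:///...
    return "file://" + path
```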
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15806
Hi @oza, thank you for this patch; however, I'm not sure it fixes
anything. What was your problem? What didn't work for you?
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15806#discussion_r86933542
--- Diff:
examples/src/main/scala/org/apache/spark/examples/sql/streaming/StructuredNetworkWordCount.scala
---
@@ -68,6 +70,8 @@ object
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15806#discussion_r86933461
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/StreamingQueryManager.scala
---
@@ -219,10 +219,11 @@ class StreamingQueryManager private
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15804
cc @tdas
GitHub user brkyvz opened a pull request:
https://github.com/apache/spark/pull/15804
[SPARK-18342] Make rename failures fatal in HDFSBackedStateStore
## What changes were proposed in this pull request?
If the rename operation in the state store fails (`fs.rename` returns
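The idea can be sketched as follows (pure Python; `fs_rename` stands in for Hadoop's `FileSystem.rename`, which signals failure by returning `false` rather than raising, and the function name is illustrative, not Spark's):

```python
def commit_delta_file(fs_rename, src, dst):
    """Treat a failed rename as a hard error instead of silently
    continuing with a missing state file. fs_rename is any callable
    with Hadoop's (src, dst) -> bool contract."""
    if not fs_rename(src, dst):
        raise IOError(f"Failed to rename {src} to {dst}")

# A rename that reports failure now surfaces as an exception:
try:
    commit_delta_file(lambda s, d: False, "tmp-delta", "1.delta")
    failed = False
except IOError:
    failed = True
assert failed
```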
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15786#discussion_r86903334
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/memory.scala
---
@@ -212,4 +212,8 @@ class MemorySink(val schema: StructType
GitHub user brkyvz opened a pull request:
https://github.com/apache/spark/pull/15801
[SPARK-18337] Complete mode memory sinks should be able to recover from
checkpoints
## What changes were proposed in this pull request?
It would be nice if memory sinks can also recover
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15786
Thanks @lw-lin ! Left one last comment. @davies Can you also please take a
look? I think you implemented most of the statistics code
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15786#discussion_r86667129
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/MemorySinkSuite.scala ---
@@ -187,6 +187,31 @@ class MemorySinkSuite extends StreamTest with
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15771
@marmbrus Addressed
GitHub user brkyvz opened a pull request:
https://github.com/apache/spark/pull/15771
[SPARK-18260] Make from_json null safe
## What changes were proposed in this pull request?
`from_json` is currently not safe against `null` rows. This PR adds a fix
and a regression test
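The intent of the fix can be sketched in pure Python (a stand-in for the Catalyst expression, not the actual implementation):

```python
import json

def from_json_null_safe(value):
    """Null-safe JSON parsing: a null input row yields null output
    instead of throwing, and malformed input also degrades to null."""
    if value is None:
        return None
    try:
        return json.loads(value)
    except (ValueError, TypeError):
        return None
```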
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15702
A very dumb question (I apologize): there is nothing stopping a user from
actually using processing time as a watermark with this API either. One can
easily do `df.withColumn("time
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15632
@srowen I think it's a matter of how fast upstream publishes a new version,
and we can make sure that everything works
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/14553
LGTM as well!
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/14553#discussion_r84811441
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/memory.scala
---
@@ -111,6 +126,23 @@ case class MemoryStream[A : Encoder](id
Github user brkyvz closed the pull request at:
https://github.com/apache/spark/pull/15470
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15235
@petermaxlee Thanks for making the change. This LGTM now that the fix for
`SPARK-17599` is in the right place. The rest is just moving around,
consolidating old code.
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15470
cc @zsxwing
GitHub user brkyvz opened a pull request:
https://github.com/apache/spark/pull/15470
[SPARK-17921] failfast on checkpointLocation specified for memory streams
## What changes were proposed in this pull request?
The checkpointLocation option in memory streams in
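The fail-fast idea can be sketched like this (pure Python with illustrative names; the real check would live in Spark's Scala sink-resolution code):

```python
def validate_sink_options(sink_name, options):
    """Fail fast when an option the sink cannot honor is supplied.
    The memory sink keeps results in the driver's memory, so a
    user-specified checkpointLocation would silently do nothing;
    rejecting it up front avoids false expectations of recovery."""
    if sink_name == "memory" and "checkpointLocation" in options:
        raise ValueError(
            "The memory sink does not support a user-specified "
            "checkpointLocation; remove the option")
```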
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15437
Thanks @zsxwing addressed your comments
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82890911
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamMetrics.scala
---
@@ -0,0 +1,244 @@
+/*
+ * Licensed to the Apache
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82890581
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -516,12 +563,127 @@ class StreamExecution
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82890465
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -105,11 +105,21 @@ class StreamExecution(
var
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82890127
--- Diff: python/pyspark/sql/streaming.py ---
@@ -189,6 +189,282 @@ def resetTerminated(self):
self._jsqm.resetTerminated
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15437
cc @tdas @zsxwing Would one of you want to look at this?
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82872469
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -176,7 +184,9 @@ class StreamExecution
GitHub user brkyvz opened a pull request:
https://github.com/apache/spark/pull/15437
[SPARK-17876] Write StructuredStreaming WAL to a stream instead of
materializing all at once
## What changes were proposed in this pull request?
The CompactibleFileStreamLog materializes
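The shape of the change can be sketched with ordinary Python file objects (an illustration of writing entries incrementally rather than building one big string in memory; not Spark's actual metadata-log code):

```python
import io

def write_log_entries(out, entries, serialize):
    """Stream each log entry to the output as it is serialized,
    so memory use stays proportional to one entry, not the whole log."""
    for entry in entries:
        out.write(serialize(entry))
        out.write("\n")

buf = io.StringIO()
write_log_entries(buf, [{"path": "a"}, {"path": "b"}], repr)
```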
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82709451
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamMetrics.scala
---
@@ -0,0 +1,244 @@
+/*
+ * Licensed to the Apache
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82709478
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamMetrics.scala
---
@@ -0,0 +1,244 @@
+/*
+ * Licensed to the Apache
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82709228
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamMetrics.scala
---
@@ -0,0 +1,244 @@
+/*
+ * Licensed to the Apache
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82708496
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -530,7 +692,7 @@ class StreamExecution(
case
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82708346
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -221,8 +247,15 @@ class StreamExecution
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82708289
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -176,7 +184,9 @@ class StreamExecution
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82708317
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -176,7 +184,9 @@ class StreamExecution
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82708148
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -176,7 +184,9 @@ class StreamExecution
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82707725
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StatefulAggregate.scala
---
@@ -56,7 +57,12 @@ case class StateStoreRestoreExec
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82707355
--- Diff: python/pyspark/sql/streaming.py ---
@@ -189,6 +189,282 @@ def resetTerminated(self):
self._jsqm.resetTerminated
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82707181
--- Diff: python/pyspark/sql/streaming.py ---
@@ -189,6 +189,282 @@ def resetTerminated(self):
self._jsqm.resetTerminated
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82707120
--- Diff: python/pyspark/sql/streaming.py ---
@@ -189,6 +189,282 @@ def resetTerminated(self):
self._jsqm.resetTerminated
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82706974
--- Diff: python/pyspark/sql/streaming.py ---
@@ -189,6 +189,282 @@ def resetTerminated(self):
self._jsqm.resetTerminated
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82706923
--- Diff: python/pyspark/sql/streaming.py ---
@@ -189,6 +189,282 @@ def resetTerminated(self):
self._jsqm.resetTerminated
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82706830
--- Diff: python/pyspark/sql/streaming.py ---
@@ -189,6 +189,282 @@ def resetTerminated(self):
self._jsqm.resetTerminated
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82706767
--- Diff: python/pyspark/sql/streaming.py ---
@@ -189,6 +189,282 @@ def resetTerminated(self):
self._jsqm.resetTerminated
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15307#discussion_r82706656
--- Diff: python/pyspark/sql/streaming.py ---
@@ -189,6 +189,282 @@ def resetTerminated(self):
self._jsqm.resetTerminated
Github user brkyvz closed the pull request at:
https://github.com/apache/spark/pull/15380
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/15380
cc @marmbrus
GitHub user brkyvz opened a pull request:
https://github.com/apache/spark/pull/15380
Backport [SPARK-15062][SQL] fix list type infer serializer issue
## What changes were proposed in this pull request?
This backports
https://github.com/apache/spark/commit
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15235#discussion_r80582937
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala
---
@@ -82,73 +85,185 @@ class ListingFileCatalog
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15235#discussion_r80583033
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalogSuite.scala
---
@@ -0,0 +1,34 @@
+/*
+ * Licensed to
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15235#discussion_r80582702
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala
---
@@ -82,73 +85,185 @@ class ListingFileCatalog
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15235#discussion_r80583392
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala
---
@@ -82,73 +83,177 @@ class ListingFileCatalog
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15235#discussion_r80573942
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala
---
@@ -82,73 +85,185 @@ class ListingFileCatalog
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15235#discussion_r80583326
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala
---
@@ -82,73 +85,185 @@ class ListingFileCatalog
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15235#discussion_r80574449
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala
---
@@ -82,73 +85,185 @@ class ListingFileCatalog