[
https://issues.apache.org/jira/browse/SPARK-37621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17458516#comment-17458516
]
Ryan Blue commented on SPARK-37621:
---
[~hyukjin.kwon], this affects any source that doesn't always
[
https://issues.apache.org/jira/browse/SPARK-33779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-33779.
---
Fix Version/s: 3.2.0
Resolution: Fixed
Merged PR #30706. Thanks [~aokolnychyi]!
>
Ryan Blue created SPARK-32168:
-
Summary: DSv2 SQL overwrite incorrectly uses static plan with
hidden partitions
Key: SPARK-32168
URL: https://issues.apache.org/jira/browse/SPARK-32168
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-32037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17140825#comment-17140825
]
Ryan Blue commented on SPARK-32037:
---
What about "healthy" and "unhealthy"? That's basically what we
[
https://issues.apache.org/jira/browse/SPARK-31255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-31255:
--
Issue Type: New Feature (was: Bug)
> DataSourceV2: Add metadata columns
>
Ryan Blue created SPARK-31255:
-
Summary: DataSourceV2: Add metadata columns
Key: SPARK-31255
URL: https://issues.apache.org/jira/browse/SPARK-31255
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-29558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979455#comment-16979455
]
Ryan Blue commented on SPARK-29558:
---
Thanks for fixing this, [~cloud_fan]!
> ResolveTables and
[
https://issues.apache.org/jira/browse/SPARK-29558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-29558.
---
Fix Version/s: 3.0.0
Resolution: Fixed
> ResolveTables and ResolveRelations should be
[
https://issues.apache.org/jira/browse/SPARK-29966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979419#comment-16979419
]
Ryan Blue commented on SPARK-29966:
---
As I said on the PR, I'm -1 on changing a public extension API to
[
https://issues.apache.org/jira/browse/SPARK-29900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16974456#comment-16974456
]
Ryan Blue commented on SPARK-29900:
---
To be clear, we think this is going to be a breaking change,
[
https://issues.apache.org/jira/browse/SPARK-29789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-29789.
---
Fix Version/s: 3.0.0
Resolution: Fixed
> should not parse the bucket column name again when
[
https://issues.apache.org/jira/browse/SPARK-29277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-29277.
---
Fix Version/s: 3.0.0
Resolution: Fixed
Fixed by #25955.
> DataSourceV2: Add early filter
[
https://issues.apache.org/jira/browse/SPARK-29592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16962527#comment-16962527
]
Ryan Blue commented on SPARK-29592:
---
There is not currently a way to alter the partition spec for a
Ryan Blue created SPARK-29277:
-
Summary: DataSourceV2: Add early filter and projection pushdown
Key: SPARK-29277
URL: https://issues.apache.org/jira/browse/SPARK-29277
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-29249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-29249:
--
Description: tableProperty should return CreateTableWriter, not
DataFrameWriterV2.
>
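The update above argues that {{tableProperty}} belongs on the create-path builder, not on the general v2 writer. A minimal, hypothetical Python sketch of that type-level separation (class names borrowed from the issue; this is not Spark's actual implementation):

```python
class DataFrameWriterV2:
    """Writer for existing tables: data operations only (e.g. append)."""
    def append(self):
        return "append"


class CreateTableWriter(DataFrameWriterV2):
    """Builder used only when creating or replacing a table."""
    def __init__(self):
        self.properties = {}

    def tableProperty(self, key, value):
        # Returning CreateTableWriter (not the base writer) keeps
        # property-setting off the existing-table write path entirely.
        self.properties[key] = value
        return self

    def create(self):
        return self.properties
```

With this shape, `CreateTableWriter().tableProperty("format", "parquet").create()` works, while a plain `DataFrameWriterV2` has no `tableProperty` at all, so table properties cannot be set for existing tables.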
Ryan Blue created SPARK-29249:
-
Summary: DataFrameWriterV2 should not allow setting table
properties for existing tables
Key: SPARK-29249
URL: https://issues.apache.org/jira/browse/SPARK-29249
Project:
Ryan Blue created SPARK-29157:
-
Summary: DataSourceV2: Add DataFrameWriterV2 to Python API
Key: SPARK-29157
URL: https://issues.apache.org/jira/browse/SPARK-29157
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-29014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16926889#comment-16926889
]
Ryan Blue commented on SPARK-29014:
---
[~cloud_fan], why does this require a major refactor?
It would
[
https://issues.apache.org/jira/browse/SPARK-28970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16924681#comment-16924681
]
Ryan Blue commented on SPARK-28970:
---
I think we should, yes.
> implement USE CATALOG/NAMESPACE for
Ryan Blue created SPARK-29014:
-
Summary: DataSourceV2: Clean up current, default, and session
catalog uses
Key: SPARK-29014
URL: https://issues.apache.org/jira/browse/SPARK-29014
Project: Spark
Ryan Blue created SPARK-28979:
-
Summary: DataSourceV2: Rename UnresolvedTable
Key: SPARK-28979
URL: https://issues.apache.org/jira/browse/SPARK-28979
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-28899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-28899.
---
Fix Version/s: 3.0.0
Resolution: Fixed
> merge the testing in-memory v2 catalogs from
Ryan Blue created SPARK-28878:
-
Summary: DataSourceV2 should not insert extra projection for
columnar batches
Key: SPARK-28878
URL: https://issues.apache.org/jira/browse/SPARK-28878
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-28846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-28846.
---
Resolution: Duplicate
> Set OMP_NUM_THREADS to executor cores for python
>
[
https://issues.apache.org/jira/browse/SPARK-28843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-28843:
--
Description:
While testing hardware with more cores, we found that the amount of memory
required by
Ryan Blue created SPARK-28843:
-
Summary: Set OMP_NUM_THREADS to executor cores to reduce Python
memory consumption
Key: SPARK-28843
URL: https://issues.apache.org/jira/browse/SPARK-28843
Project: Spark
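The idea behind this issue: when OMP_NUM_THREADS is unset, OpenMP-backed libraries (OpenBLAS, MKL, etc.) size their thread pools to the machine's hardware core count, which inflates per-worker memory on high-core hosts. A sketch of the fix, with a hypothetical helper name (not Spark's actual worker-launch code):

```python
import os


def omp_env_for_worker(executor_cores, existing_env=None):
    """Build the env for a Python worker, capping OpenMP threads.

    Defaults OMP_NUM_THREADS to the executor's core count so thread
    pools scale with the cores the worker was actually given, while
    respecting any value the user already set.
    """
    env = dict(existing_env if existing_env is not None else os.environ)
    env.setdefault("OMP_NUM_THREADS", str(executor_cores))
    return env
```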
Ryan Blue created SPARK-28628:
-
Summary: Support namespaces in V2SessionCatalog
Key: SPARK-28628
URL: https://issues.apache.org/jira/browse/SPARK-28628
Project: Spark
Issue Type: Bug
Ryan Blue created SPARK-28612:
-
Summary: DataSourceV2: Add new DataFrameWriter API for v2
Key: SPARK-28612
URL: https://issues.apache.org/jira/browse/SPARK-28612
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-23204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-23204.
---
Resolution: Fixed
Fix Version/s: 3.0.0
I'm closing this because it is implemented by
[
https://issues.apache.org/jira/browse/SPARK-25280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899532#comment-16899532
]
Ryan Blue commented on SPARK-25280:
---
[~hyukjin.kwon], is there anything left to do for this? I think
[
https://issues.apache.org/jira/browse/SPARK-14543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16888222#comment-16888222
]
Ryan Blue commented on SPARK-14543:
---
{{byName}} was never added to Apache Spark. The change was
[
https://issues.apache.org/jira/browse/SPARK-28376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16885428#comment-16885428
]
Ryan Blue commented on SPARK-28376:
---
I don't think this is a regression. The linked issue was to
Ryan Blue created SPARK-28374:
-
Summary: DataSourceV2: Add method to support INSERT ... IF NOT
EXISTS
Key: SPARK-28374
URL: https://issues.apache.org/jira/browse/SPARK-28374
Project: Spark
Ryan Blue created SPARK-28319:
-
Summary: DataSourceV2: Support SHOW TABLES
Key: SPARK-28319
URL: https://issues.apache.org/jira/browse/SPARK-28319
Project: Spark
Issue Type: Sub-task
[
https://issues.apache.org/jira/browse/SPARK-28219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877172#comment-16877172
]
Ryan Blue commented on SPARK-28219:
---
I'm closing this as a duplicate. Please use SPARK-27708.
If you
[
https://issues.apache.org/jira/browse/SPARK-28219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-28219.
---
Resolution: Duplicate
> Data source v2 user guide
> -
>
>
[
https://issues.apache.org/jira/browse/SPARK-28192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875046#comment-16875046
]
Ryan Blue commented on SPARK-28192:
---
It sounds like what you want is for a source to be able to
Ryan Blue created SPARK-28139:
-
Summary: DataSourceV2: Add AlterTable v2 implementation
Key: SPARK-28139
URL: https://issues.apache.org/jira/browse/SPARK-28139
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-27857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-27857:
--
Summary: DataSourceV2: Support ALTER TABLE statements in catalyst SQL
parser (was: DataSourceV2:
Ryan Blue created SPARK-27965:
-
Summary: Add extractors for logical transforms
Key: SPARK-27965
URL: https://issues.apache.org/jira/browse/SPARK-27965
Project: Spark
Issue Type: Bug
Ryan Blue created SPARK-27964:
-
Summary: Create CatalogV2Util
Key: SPARK-27964
URL: https://issues.apache.org/jira/browse/SPARK-27964
Project: Spark
Issue Type: Bug
Components: SQL
[
https://issues.apache.org/jira/browse/SPARK-27919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-27919:
--
Affects Version/s: (was: 2.4.3)
3.0.0
> DataSourceV2: Add v2 session
Ryan Blue created SPARK-27960:
-
Summary: DataSourceV2 ORC implementation doesn't handle schemas
correctly
Key: SPARK-27960
URL: https://issues.apache.org/jira/browse/SPARK-27960
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-27960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16856955#comment-16856955
]
Ryan Blue commented on SPARK-27960:
---
[~Gengliang.Wang], FYI
> DataSourceV2 ORC implementation doesn't
Ryan Blue created SPARK-27919:
-
Summary: DataSourceV2: Add v2 session catalog
Key: SPARK-27919
URL: https://issues.apache.org/jira/browse/SPARK-27919
Project: Spark
Issue Type: Bug
Ryan Blue created SPARK-27909:
-
Summary: Fix CTE substitution dependence on ResolveRelations
throwing AnalysisException
Key: SPARK-27909
URL: https://issues.apache.org/jira/browse/SPARK-27909
Project:
Ryan Blue created SPARK-27857:
-
Summary: DataSourceV2: Support ALTER TABLE statements
Key: SPARK-27857
URL: https://issues.apache.org/jira/browse/SPARK-27857
Project: Spark
Issue Type: Sub-task
[
https://issues.apache.org/jira/browse/SPARK-27784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844404#comment-16844404
]
Ryan Blue commented on SPARK-27784:
---
[~cloud_fan], I don't see this happening in master because an
[
https://issues.apache.org/jira/browse/SPARK-27784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-27784:
--
Description:
This is a correctness bug when reusing a set of project expressions in the
DataFrame
Ryan Blue created SPARK-27784:
-
Summary: Alias ID reuse can break correctness when substituting
foldable expressions
Key: SPARK-27784
URL: https://issues.apache.org/jira/browse/SPARK-27784
Project: Spark
Ryan Blue created SPARK-27732:
-
Summary: DataSourceV2: Add CreateTable logical operation
Key: SPARK-27732
URL: https://issues.apache.org/jira/browse/SPARK-27732
Project: Spark
Issue Type:
Ryan Blue created SPARK-27724:
-
Summary: Add RTAS logical operation
Key: SPARK-27724
URL: https://issues.apache.org/jira/browse/SPARK-27724
Project: Spark
Issue Type: Sub-task
[
https://issues.apache.org/jira/browse/SPARK-27724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-27724:
--
Summary: DataSourceV2: Add RTAS logical operation (was: Add RTAS logical
operation)
> DataSourceV2:
[
https://issues.apache.org/jira/browse/SPARK-24923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-24923:
--
Summary: DataSourceV2: Add CTAS logical operation (was: DataSourceV2: Add
CTAS and RTAS logical
Ryan Blue created SPARK-27708:
-
Summary: Add documentation for v2 data sources
Key: SPARK-27708
URL: https://issues.apache.org/jira/browse/SPARK-27708
Project: Spark
Issue Type: Improvement
Ryan Blue created SPARK-27693:
-
Summary: DataSourceV2: Add default catalog property
Key: SPARK-27693
URL: https://issues.apache.org/jira/browse/SPARK-27693
Project: Spark
Issue Type: Improvement
Ryan Blue created SPARK-27661:
-
Summary: Add SupportsNamespaces interface for v2 catalogs
Key: SPARK-27661
URL: https://issues.apache.org/jira/browse/SPARK-27661
Project: Spark
Issue Type:
Ryan Blue created SPARK-27658:
-
Summary: Catalog API to load functions
Key: SPARK-27658
URL: https://issues.apache.org/jira/browse/SPARK-27658
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-23098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835700#comment-16835700
]
Ryan Blue commented on SPARK-23098:
---
I don't think there's a DSv2-related obstacle to implementing
[
https://issues.apache.org/jira/browse/SPARK-27471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16822271#comment-16822271
]
Ryan Blue commented on SPARK-27471:
---
Thanks [~hyukjin.kwon]. I meant to set the target version, not
[
https://issues.apache.org/jira/browse/SPARK-27471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-27471:
--
Target Version/s: 3.0.0
> Reorganize public v2 catalog API
>
>
>
Ryan Blue created SPARK-27471:
-
Summary: Reorganize public v2 catalog API
Key: SPARK-27471
URL: https://issues.apache.org/jira/browse/SPARK-27471
Project: Spark
Issue Type: Improvement
Ryan Blue created SPARK-27386:
-
Summary: Improve partition transform parsing
Key: SPARK-27386
URL: https://issues.apache.org/jira/browse/SPARK-27386
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-25006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-25006.
---
Resolution: Won't Fix
Closing this because SPARK-26946 replaces it.
> Add optional catalog to
Ryan Blue created SPARK-27181:
-
Summary: Add public expression and transform API for DSv2
partitioning
Key: SPARK-27181
URL: https://issues.apache.org/jira/browse/SPARK-27181
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-26778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16792843#comment-16792843
]
Ryan Blue commented on SPARK-26778:
---
[~Gengliang.Wang], can you clarify what this issue is tracking?
Ryan Blue created SPARK-27108:
-
Summary: Add parsed CreateTable plans to Catalyst
Key: SPARK-27108
URL: https://issues.apache.org/jira/browse/SPARK-27108
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-27067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-27067.
---
Resolution: Fixed
I'm resolving this issue because the vote to adopt the proposal passed.
I've
[
https://issues.apache.org/jira/browse/SPARK-27067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-27067:
--
Attachment: SPIP_ Spark API for Table Metadata.pdf
> SPIP: Catalog API for table metadata
>
[
https://issues.apache.org/jira/browse/SPARK-27066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-27066:
--
Description:
Goals:
* Propose semantics for identifiers and a listing API to support multiple
Ryan Blue created SPARK-27066:
-
Summary: SPIP: Identifiers for multi-catalog support
Key: SPARK-27066
URL: https://issues.apache.org/jira/browse/SPARK-27066
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-27067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-27067:
--
Description: Goal: Define a catalog API to create, alter, load, and drop
tables
> SPIP: Catalog API
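The stated goal is a catalog API to create, alter, load, and drop tables. A hypothetical sketch of that surface (illustrative names only, not the SPIP's final API):

```python
class Table:
    """Minimal table handle: a name plus a schema."""
    def __init__(self, name, schema):
        self.name = name
        self.schema = schema


class TableCatalog:
    """In-memory stand-in for the four operations the SPIP describes."""
    def __init__(self):
        self._tables = {}

    def create_table(self, name, schema):
        if name in self._tables:
            raise ValueError(f"table already exists: {name}")
        self._tables[name] = Table(name, schema)
        return self._tables[name]

    def load_table(self, name):
        return self._tables[name]

    def alter_table(self, name, schema):
        self._tables[name].schema = schema
        return self._tables[name]

    def drop_table(self, name):
        # Returns True if the table existed and was removed.
        return self._tables.pop(name, None) is not None
```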
Ryan Blue created SPARK-27067:
-
Summary: SPIP: Catalog API for table metadata
Key: SPARK-27067
URL: https://issues.apache.org/jira/browse/SPARK-27067
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-27066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-27066.
---
Resolution: Fixed
I'm resolving this issue because the vote to adopt the proposal passed.
I've
[
https://issues.apache.org/jira/browse/SPARK-23521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784736#comment-16784736
]
Ryan Blue commented on SPARK-23521:
---
I've turned off commenting on the google doc to preserve its
[
https://issues.apache.org/jira/browse/SPARK-27066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-27066:
--
Attachment: SPIP_ Identifiers for multi-catalog Spark.pdf
> SPIP: Identifiers for multi-catalog
[
https://issues.apache.org/jira/browse/SPARK-23521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-23521:
--
Attachment: SPIP_ Standardize logical plans.pdf
> SPIP: Standardize SQL logical plans with
[
https://issues.apache.org/jira/browse/SPARK-26874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-26874:
--
Summary: With PARQUET-1414, Spark can erroneously write empty pages (was:
When we upgrade Parquet to
[
https://issues.apache.org/jira/browse/SPARK-26874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16767798#comment-16767798
]
Ryan Blue commented on SPARK-26874:
---
To be clear, Parquet has not released any 1.11.x versions so this
Ryan Blue created SPARK-26873:
-
Summary: FileFormatWriter creates inconsistent MR job IDs
Key: SPARK-26873
URL: https://issues.apache.org/jira/browse/SPARK-26873
Project: Spark
Issue Type: Bug
Ryan Blue created SPARK-26811:
-
Summary: Add DataSourceV2 capabilities to check support for batch
append, overwrite, truncate during analysis.
Key: SPARK-26811
URL: https://issues.apache.org/jira/browse/SPARK-26811
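The point of capability checks is to reject an unsupported write mode during analysis rather than at execution time. A hypothetical sketch of that pattern (flag names and helper are illustrative, not Spark's actual TableCapability API):

```python
# Illustrative capability flags a table could advertise.
BATCH_APPEND = "batch-append"
OVERWRITE = "overwrite"
TRUNCATE = "truncate"


class AnalysisError(Exception):
    """Raised during analysis, before any task runs."""


def check_write_capability(table_capabilities, required):
    """Fail fast at analysis time if the table lacks a required capability."""
    missing = set(required) - set(table_capabilities)
    if missing:
        raise AnalysisError(f"table does not support: {sorted(missing)}")
```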
[
https://issues.apache.org/jira/browse/SPARK-26677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16756654#comment-16756654
]
Ryan Blue commented on SPARK-26677:
---
Thanks, sorry about the mistake.
> Incorrect results of
[
https://issues.apache.org/jira/browse/SPARK-26677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-26677:
--
Fix Version/s: 2.4.1
> Incorrect results of not(eqNullSafe) when data read from Parquet file
>
[
https://issues.apache.org/jira/browse/SPARK-26677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16752606#comment-16752606
]
Ryan Blue commented on SPARK-26677:
---
To clarify [~dongjoon]'s comment: All recent versions of Parquet
Ryan Blue created SPARK-26682:
-
Summary: Task attempt ID collision causes lost data
Key: SPARK-26682
URL: https://issues.apache.org/jira/browse/SPARK-26682
Project: Spark
Issue Type: Improvement
Ryan Blue created SPARK-26681:
-
Summary: Support Ammonite scopes in OuterScopes
Key: SPARK-26681
URL: https://issues.apache.org/jira/browse/SPARK-26681
Project: Spark
Issue Type: Improvement
Ryan Blue created SPARK-26679:
-
Summary: Deconflict spark.executor.pyspark.memory and
spark.python.worker.memory
Key: SPARK-26679
URL: https://issues.apache.org/jira/browse/SPARK-26679
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-23398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-23398.
---
Resolution: Fixed
SPARK-25528 adds a Table interface that can report its schema.
> DataSourceV2
Ryan Blue created SPARK-2:
-
Summary: DataSourceV2: Add overwrite and dynamic overwrite.
Key: SPARK-2
URL: https://issues.apache.org/jira/browse/SPARK-2
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-23321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-23321.
---
Resolution: Fixed
Done for Append plans. Will be included in new logical plans as they are added.
[
https://issues.apache.org/jira/browse/SPARK-25966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16678728#comment-16678728
]
Ryan Blue commented on SPARK-25966:
---
[~andrioni], were there any failed tasks or executors in the job
[
https://issues.apache.org/jira/browse/SPARK-25531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629418#comment-16629418
]
Ryan Blue commented on SPARK-25531:
---
[~cloud_fan], what was the intent for this umbrella issue? You
[
https://issues.apache.org/jira/browse/SPARK-23521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-23521.
---
Resolution: Fixed
Marking this as "Fixed" because the vote passed.
> SPIP: Standardize SQL logical
[
https://issues.apache.org/jira/browse/SPARK-15420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue resolved SPARK-15420.
---
Resolution: Won't Fix
> Repartition and sort before Parquet writes
>
[
https://issues.apache.org/jira/browse/SPARK-15420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ryan Blue updated SPARK-15420:
--
Target Version/s: (was: 2.4.0)
> Repartition and sort before Parquet writes
>
[
https://issues.apache.org/jira/browse/SPARK-25213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590531#comment-16590531
]
Ryan Blue commented on SPARK-25213:
---
Sorry, I just realized the point is that the filter could have a
[
https://issues.apache.org/jira/browse/SPARK-25213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16590406#comment-16590406
]
Ryan Blue commented on SPARK-25213:
---
[~cloud_fan], that PR ensures that there is a Project node on top
[
https://issues.apache.org/jira/browse/SPARK-25188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589088#comment-16589088
]
Ryan Blue commented on SPARK-25188:
---
One update to that proposal: {{BatchOverwriteSupport}} should be
[
https://issues.apache.org/jira/browse/SPARK-25188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589086#comment-16589086
]
Ryan Blue commented on SPARK-25188:
---
Here's the original proposal for adding a write config:
The read
[
https://issues.apache.org/jira/browse/SPARK-25190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589070#comment-16589070
]
Ryan Blue commented on SPARK-25190:
---
The main problem I have with the current pushdown API is that