Davies Liu created SPARK-12288:
--
Summary: Support UnsafeRow in Coalesce/Except/Intersect
Key: SPARK-12288
URL: https://issues.apache.org/jira/browse/SPARK-12288
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-12287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-12287:
---
Issue Type: Improvement (was: Epic)
> Support UnsafeRow in MapPartitions/MapGroups/CoGroup
> ---
Davies Liu created SPARK-12287:
--
Summary: Support UnsafeRow in MapPartitions/MapGroups/CoGroup
Key: SPARK-12287
URL: https://issues.apache.org/jira/browse/SPARK-12287
Project: Spark
Issue Type: Epic
[
https://issues.apache.org/jira/browse/SPARK-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu reassigned SPARK-12286:
--
Assignee: Davies Liu
> Support UnsafeRow in all SparkPlan (if possible)
>
Davies Liu created SPARK-12286:
--
Summary: Support UnsafeRow in all SparkPlan (if possible)
Key: SPARK-12286
URL: https://issues.apache.org/jira/browse/SPARK-12286
Project: Spark
Issue Type: Epic
Davies Liu created SPARK-12284:
--
Summary: Output UnsafeRow from window function
Key: SPARK-12284
URL: https://issues.apache.org/jira/browse/SPARK-12284
Project: Spark
Issue Type: Improvement
Davies Liu created SPARK-12283:
--
Summary: Use UnsafeRow as the buffer in SortBasedAggregation to
avoid Unsafe/Safe conversion
Key: SPARK-12283
URL: https://issues.apache.org/jira/browse/SPARK-12283
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-11885.
Resolution: Fixed
Fix Version/s: 1.5.3
> UDAF may nondeterministically generate wrong results
[
https://issues.apache.org/jira/browse/SPARK-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15053528#comment-15053528
]
Davies Liu commented on SPARK-11885:
The root cause is that we generate ExprId for Sc
[
https://issues.apache.org/jira/browse/SPARK-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu reassigned SPARK-11885:
--
Assignee: Davies Liu (was: Yin Huai)
> UDAF may nondeterministically generate wrong results
>
[
https://issues.apache.org/jira/browse/SPARK-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-11713.
Resolution: Fixed
Fix Version/s: 2.0.0
Issue resolved by pull request 10082
[https://github.
[
https://issues.apache.org/jira/browse/SPARK-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu reassigned SPARK-12213:
--
Assignee: Davies Liu
> Query with only one distinct should not have an Expand
>
[
https://issues.apache.org/jira/browse/SPARK-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-1.
Resolution: Fixed
Fix Version/s: 1.6.0
2.0.0
Issue resolved by pull reque
Davies Liu created SPARK-12213:
--
Summary: Query with only one distinct should not have an Expand
Key: SPARK-12213
URL: https://issues.apache.org/jira/browse/SPARK-12213
Project: Spark
Issue Ty
[
https://issues.apache.org/jira/browse/SPARK-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15047193#comment-15047193
]
Davies Liu edited comment on SPARK-12179 at 12/8/15 6:24 PM:
-
[
https://issues.apache.org/jira/browse/SPARK-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15047193#comment-15047193
]
Davies Liu commented on SPARK-12179:
There are two directions to narrow down the problem
[
https://issues.apache.org/jira/browse/SPARK-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15047186#comment-15047186
]
Davies Liu commented on SPARK-12179:
Could you also test 1.6-RC1?
I'm just wondering
[
https://issues.apache.org/jira/browse/SPARK-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-12179:
---
Priority: Critical (was: Minor)
> Spark SQL get different result with the same code
> --
[
https://issues.apache.org/jira/browse/SPARK-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15046014#comment-15046014
]
Davies Liu commented on SPARK-12179:
This may be related to https://issues.apache.org
[
https://issues.apache.org/jira/browse/SPARK-12132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-12132.
Resolution: Fixed
Fix Version/s: 1.6.0
2.0.0
Issue resolved by pull reque
[
https://issues.apache.org/jira/browse/SPARK-12032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-12032.
Resolution: Fixed
Fix Version/s: 2.0.0
Issue resolved by pull request 10073
[https://github.
[
https://issues.apache.org/jira/browse/SPARK-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-12089.
Resolution: Fixed
Fix Version/s: 1.6.0
2.0.0
Issue resolved by pull reque
Davies Liu created SPARK-12132:
--
Summary: Ctrl-C should clear current line in pyspark shell
Key: SPARK-12132
URL: https://issues.apache.org/jira/browse/SPARK-12132
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-12110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15037349#comment-15037349
]
Davies Liu commented on SPARK-12110:
[~aedwip] How do you launch the EC2 cluster using
[
https://issues.apache.org/jira/browse/SPARK-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-12089:
---
Summary: [][]][][]][[[ (was: java.lang.NegativeArraySizeException when
growing BufferHolder)
> [][]
[
https://issues.apache.org/jira/browse/SPARK-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15036807#comment-15036807
]
Davies Liu commented on SPARK-12089:
This query will not generate huge record, each r
[
https://issues.apache.org/jira/browse/SPARK-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15036295#comment-15036295
]
Davies Liu commented on SPARK-12089:
[~tyro89] Are you building a large Array using grou
[
https://issues.apache.org/jira/browse/SPARK-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15036280#comment-15036280
]
Davies Liu commented on SPARK-12089:
Could you turn on debug log, and paste the java
[
https://issues.apache.org/jira/browse/SPARK-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-12089:
---
Priority: Critical (was: Major)
> java.lang.NegativeArraySizeException when growing BufferHolder
> -
[
https://issues.apache.org/jira/browse/SPARK-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15036194#comment-15036194
]
Davies Liu commented on SPARK-12089:
Is it possible that you have a record larger than
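As background for the buffer-growth comments above: a NegativeArraySizeException typically appears when a capacity-doubling strategy overflows Java's signed 32-bit int before the allocation happens. A minimal sketch in pure Python emulating JVM int arithmetic (the helper names here are hypothetical illustrations, not Spark's actual BufferHolder code):

```python
def java_int(x: int) -> int:
    """Wrap a Python int into Java's signed 32-bit range."""
    return (x + 2**31) % 2**32 - 2**31

def grown_capacity(current: int, needed_extra: int) -> int:
    """Naive doubling grow, as a 32-bit JVM int would compute it."""
    return java_int(max(current * 2, current + needed_extra))

# Normal case: doubling a small buffer behaves as expected.
print(grown_capacity(64, 16))            # 128

# With a ~1.2 GB buffer, doubling exceeds 2**31 - 1 and wraps
# negative; `new byte[negative]` then throws
# NegativeArraySizeException on the JVM.
print(grown_capacity(1_300_000_000, 16))  # a negative number
```

A common fix for this pattern is to clamp the new capacity at `Integer.MAX_VALUE` (or fail with a clear "record too large" error) instead of letting the multiplication wrap.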
[
https://issues.apache.org/jira/browse/SPARK-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-12090.
Resolution: Fixed
Fix Version/s: 1.6.0
1.5.3
2.0.0
Iss
[
https://issues.apache.org/jira/browse/SPARK-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu reassigned SPARK-12090:
--
Assignee: Davies Liu
> Coalesce does not consider shuffle in PySpark
> ---
Davies Liu created SPARK-12090:
--
Summary: Coalesce does not consider shuffle in PySpark
Key: SPARK-12090
URL: https://issues.apache.org/jira/browse/SPARK-12090
Project: Spark
Issue Type: Bug
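For context on the coalesce/shuffle distinction behind SPARK-12090: without a shuffle, coalesce only merges whole existing partitions, so it cannot rebalance skewed data; with a shuffle, individual rows are redistributed. A toy sketch of the no-shuffle merge, assuming simple round-robin grouping of partitions (hypothetical, not PySpark's actual implementation):

```python
def coalesce_no_shuffle(partitions, n):
    """Merge existing partitions into n groups without moving
    individual rows between groups (no shuffle): each output
    partition is a concatenation of whole input partitions."""
    out = [[] for _ in range(n)]
    for i, part in enumerate(partitions):
        out[i % n].extend(part)
    return out

parts = [[1, 2], [3], [4, 5, 6], [7]]
print(coalesce_no_shuffle(parts, 2))  # [[1, 2, 4, 5, 6], [3, 7]]
```

Because rows never leave their original partition grouping, a skewed input partition stays skewed; only a shuffle can split it up.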
[
https://issues.apache.org/jira/browse/SPARK-6830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15034635#comment-15034635
]
Davies Liu commented on SPARK-6830:
---
+1
> Memoize frequently queried vals in RDD, such
[
https://issues.apache.org/jira/browse/SPARK-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15034620#comment-15034620
]
Davies Liu commented on SPARK-12077:
https://github.com/apache/spark/pull/10075
> Us
Davies Liu created SPARK-12077:
--
Summary: Use more robust plan for single distinct aggregation
Key: SPARK-12077
URL: https://issues.apache.org/jira/browse/SPARK-12077
Project: Spark
Issue Type:
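One common "more robust plan" for a single distinct aggregation is to rewrite it as two non-distinct phases: first deduplicate the (key, value) pairs, then run a plain count over the deduplicated rows. A hedged Python sketch of that rewrite (illustrative only, not Catalyst's actual plan):

```python
from collections import defaultdict

def count_distinct_per_key(rows):
    """Two-phase plan for SELECT k, COUNT(DISTINCT x) GROUP BY k:
    phase 1 deduplicates (k, x) pairs, phase 2 is a plain count
    per key. This avoids keeping a per-key hash set inside a
    single aggregate operator."""
    deduped = set(rows)            # phase 1: distinct (k, x)
    counts = defaultdict(int)
    for k, _x in deduped:          # phase 2: ordinary count
        counts[k] += 1
    return dict(counts)

rows = [("a", 1), ("a", 1), ("a", 2), ("b", 5)]
print(count_distinct_per_key(rows) == {"a": 2, "b": 1})  # True
```

In a distributed engine each phase maps onto an ordinary aggregation, so the rewrite reuses well-tested non-distinct code paths.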
[
https://issues.apache.org/jira/browse/SPARK-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15034615#comment-15034615
]
Davies Liu commented on SPARK-12030:
I also figured out the root cause last night, th
[
https://issues.apache.org/jira/browse/SPARK-12032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu reassigned SPARK-12032:
--
Assignee: Davies Liu
> Filter can't be pushed down to correct Join because of bad order of Join
[
https://issues.apache.org/jira/browse/SPARK-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032857#comment-15032857
]
Davies Liu commented on SPARK-12030:
[~smilegator] Could you post the related PRs her
[
https://issues.apache.org/jira/browse/SPARK-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-11982.
Resolution: Fixed
Fix Version/s: 2.0.0
Issue resolved by pull request 9969
[https://github.c
Davies Liu created SPARK-12054:
--
Summary: Consider nullable in codegen
Key: SPARK-12054
URL: https://issues.apache.org/jira/browse/SPARK-12054
Project: Spark
Issue Type: Improvement
Co
[
https://issues.apache.org/jira/browse/SPARK-11700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-11700.
Resolution: Fixed
Fix Version/s: 1.6.0
Issue resolved by pull request 9990
[https://github.c
[
https://issues.apache.org/jira/browse/SPARK-12032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15032217#comment-15032217
]
Davies Liu commented on SPARK-12032:
[~marmbrus] Do you have some idea how to fix this
Davies Liu created SPARK-12032:
--
Summary: Filter can't be pushed down to correct Join because of
bad order of Join
Key: SPARK-12032
URL: https://issues.apache.org/jira/browse/SPARK-12032
Project: Spark
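The join-order issue above concerns predicate pushdown. The general principle: a filter that touches only one side of a join can be evaluated below the join, producing the same rows from a smaller intermediate result. A toy illustration with hypothetical tables (not Spark code):

```python
def inner_join(left, right, key):
    """Naive nested-loop inner join on a shared key field."""
    return [{**l, **r} for l in left for r in right
            if l[key] == r[key]]

users  = [{"id": 1, "age": 17}, {"id": 2, "age": 30}]
orders = [{"id": 1, "total": 5}, {"id": 2, "total": 9}]

# Filter applied after the join ...
late = [row for row in inner_join(users, orders, "id")
        if row["age"] >= 18]

# ... versus pushed below the join: same rows, smaller join input.
early = inner_join([u for u in users if u["age"] >= 18],
                   orders, "id")

print(late == early)  # True
```

The optimizer's job is to prove the rewrite is safe; the bug report above is about the pushdown being blocked when joins are reordered.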
[
https://issues.apache.org/jira/browse/SPARK-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-12028.
Resolution: Fixed
Fix Version/s: 1.6.0
2.0.0
Issue resolved by pull reque
[
https://issues.apache.org/jira/browse/SPARK-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-11997.
Resolution: Fixed
Fix Version/s: 1.6.0
2.0.0
Issue resolved by pull reque
[
https://issues.apache.org/jira/browse/SPARK-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-11973.
Resolution: Fixed
Fix Version/s: 1.6.0
> Filter pushdown does not work with aggregation with alias
[
https://issues.apache.org/jira/browse/SPARK-12003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu reassigned SPARK-12003:
--
Assignee: Davies Liu
> Expanded star should use field name as column name
> -
[
https://issues.apache.org/jira/browse/SPARK-11700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu reassigned SPARK-11700:
--
Assignee: Davies Liu (was: Shixiong Zhu)
> Memory leak at SparkContext jobProgressListener st
[
https://issues.apache.org/jira/browse/SPARK-12003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-12003.
Resolution: Fixed
Fix Version/s: 1.6.0
2.0.0
Issue resolved by pull reque
Davies Liu created SPARK-12003:
--
Summary: Expanded star should use field name as column name
Key: SPARK-12003
URL: https://issues.apache.org/jira/browse/SPARK-12003
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-11997:
---
Priority: Blocker (was: Critical)
> NPE when saving a DataFrame as parquet and partitioned by long column
[
https://issues.apache.org/jira/browse/SPARK-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027516#comment-15027516
]
Davies Liu commented on SPARK-11997:
It works well on 1.5
> NPE when saving a DataFrame
[
https://issues.apache.org/jira/browse/SPARK-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-11997:
---
Priority: Critical (was: Major)
> NPE when saving a DataFrame as parquet and partitioned by long column
[
https://issues.apache.org/jira/browse/SPARK-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-11997:
---
Affects Version/s: 1.6.0
> NPE when saving a DataFrame as parquet and partitioned by long column
>
Davies Liu created SPARK-11997:
--
Summary: NPE when saving a DataFrame as parquet and partitioned by
long column
Key: SPARK-11997
URL: https://issues.apache.org/jira/browse/SPARK-11997
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-11969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-11969.
Resolution: Fixed
Fix Version/s: 1.6.0
2.0.0
Issue resolved by pull reque
[
https://issues.apache.org/jira/browse/SPARK-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu reassigned SPARK-11982:
--
Assignee: Davies Liu
> Improve performance of CartesianProduct
> -
Davies Liu created SPARK-11982:
--
Summary: Improve performance of CartesianProduct
Key: SPARK-11982
URL: https://issues.apache.org/jira/browse/SPARK-11982
Project: Spark
Issue Type: Improvement
Davies Liu created SPARK-11973:
--
Summary: Filter pushdown does not work with aggregation with alias
Key: SPARK-11973
URL: https://issues.apache.org/jira/browse/SPARK-11973
Project: Spark
Issue T
Davies Liu created SPARK-11969:
--
Summary: SQL UI does not work with PySpark
Key: SPARK-11969
URL: https://issues.apache.org/jira/browse/SPARK-11969
Project: Spark
Issue Type: Bug
Compo
[
https://issues.apache.org/jira/browse/SPARK-11836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu reassigned SPARK-11836:
--
Assignee: Davies Liu
> Register a Python function creates a new SQLContext
> -
[
https://issues.apache.org/jira/browse/SPARK-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15022726#comment-15022726
]
Davies Liu commented on SPARK-10538:
I think we can re-open this once you find a way
[
https://issues.apache.org/jira/browse/SPARK-11700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018469#comment-15018469
]
Davies Liu commented on SPARK-11700:
So there is at most one SQLContext leak per thread
[
https://issues.apache.org/jira/browse/SPARK-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15018466#comment-15018466
]
Davies Liu commented on SPARK-10567:
I think so, [~matei] Could you confirm that?
>
Davies Liu created SPARK-11883:
--
Summary: New Parquet reader generate wrong result
Key: SPARK-11883
URL: https://issues.apache.org/jira/browse/SPARK-11883
Project: Spark
Issue Type: Bug
Davies Liu created SPARK-11864:
--
Summary: Improve performance of max/min
Key: SPARK-11864
URL: https://issues.apache.org/jira/browse/SPARK-11864
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu closed SPARK-11850.
--
Resolution: Not A Problem
> Spark StdDev/Variance defaults are incompatible with Hive
> ---
[
https://issues.apache.org/jira/browse/SPARK-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014142#comment-15014142
]
Davies Liu commented on SPARK-11850:
[~hvanhovell] This is on purpose, we had a long
[
https://issues.apache.org/jira/browse/SPARK-11855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014135#comment-15014135
]
Davies Liu commented on SPARK-11855:
cc [~marmbrus]
> Catalyst breaks backwards comp
[
https://issues.apache.org/jira/browse/SPARK-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-9604.
---
Resolution: Fixed
> Unsafe ArrayData and MapData are very slow
> -
[
https://issues.apache.org/jira/browse/SPARK-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014128#comment-15014128
]
Davies Liu commented on SPARK-9271:
---
[~lian cheng] Is this still a problem?
> Concurren
[
https://issues.apache.org/jira/browse/SPARK-11851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-11851:
---
Priority: Critical (was: Blocker)
> Unable to start spark thrift server against secured hive metastore
[
https://issues.apache.org/jira/browse/SPARK-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-9278:
--
Assignee: Cheng Lian
> DataFrameWriter.insertInto inserts incorrect data
> -
[
https://issues.apache.org/jira/browse/SPARK-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-10567.
Resolution: Fixed
Assignee: Matei Zaharia
Fix Version/s: 1.6.0
> Reducer locality f
[
https://issues.apache.org/jira/browse/SPARK-10567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014125#comment-15014125
]
Davies Liu commented on SPARK-10567:
Since https://issues.apache.org/jira/browse/SPAR
[
https://issues.apache.org/jira/browse/SPARK-11785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-11785:
---
Priority: Critical (was: Blocker)
> When deployed against remote Hive metastore with lower versions,
[
https://issues.apache.org/jira/browse/SPARK-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014100#comment-15014100
]
Davies Liu commented on SPARK-9506:
---
[~cloud_fan] Could this be worked around by using a cu
[
https://issues.apache.org/jira/browse/SPARK-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-9506:
--
Priority: Major (was: Blocker)
> DataFrames Postgresql JDBC unable to support most of the Postgresql's
[
https://issues.apache.org/jira/browse/SPARK-11783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-11783:
---
Priority: Critical (was: Blocker)
> When deployed against remote Hive metastore, HiveContext.executi
[
https://issues.apache.org/jira/browse/SPARK-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-9686:
--
Priority: Critical (was: Blocker)
> Spark Thrift server doesn't return correct JDBC metadata
> ---
[
https://issues.apache.org/jira/browse/SPARK-11016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014078#comment-15014078
]
Davies Liu commented on SPARK-11016:
Thanks!
> Spark fails when running with a task
[
https://issues.apache.org/jira/browse/SPARK-11016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014055#comment-15014055
]
Davies Liu commented on SPARK-11016:
[~sowen] I tried to assign this to [~drcrallen
[
https://issues.apache.org/jira/browse/SPARK-11016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-11016:
---
Assignee: (was: Davies Liu)
> Spark fails when running with a task that requires a more recent version
[
https://issues.apache.org/jira/browse/SPARK-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-11657.
Resolution: Fixed
> Bad Dataframe data read from parquet
>
>
>
[
https://issues.apache.org/jira/browse/SPARK-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-11657:
---
Fix Version/s: 1.6.0
1.5.3
> Bad Dataframe data read from parquet
> --
[
https://issues.apache.org/jira/browse/SPARK-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu reassigned SPARK-11657:
--
Assignee: Davies Liu
> Bad Dataframe data read from parquet
>
[
https://issues.apache.org/jira/browse/SPARK-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15012359#comment-15012359
]
Davies Liu commented on SPARK-11657:
[~virgilp] Are you using Kryo? It may be related
[
https://issues.apache.org/jira/browse/SPARK-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-11804.
Resolution: Fixed
Fix Version/s: 1.6.0
Issue resolved by pull request 9791
[https://github.c
[
https://issues.apache.org/jira/browse/SPARK-11643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-11643.
Resolution: Fixed
Fix Version/s: 1.6.0
Issue resolved by pull request 9701
[https://github.c
Davies Liu created SPARK-11805:
--
Summary: SpillableIterator should free the in-memory sorter while
spilling
Key: SPARK-11805
URL: https://issues.apache.org/jira/browse/SPARK-11805
Project: Spark
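The idea behind SPARK-11805 is that once buffered rows have been written to disk, the in-memory copy can be released immediately rather than held until iteration finishes. A simplified Python sketch of that pattern (hypothetical class, not Spark's actual SpillableIterator):

```python
import os
import tempfile

class SpillableIterator:
    """Iterate over an in-memory sorted buffer; on spill(), write
    the items to disk and free the buffer right away instead of
    holding both copies until iteration completes."""

    def __init__(self, items):
        self._buffer = sorted(items)
        self._path = None

    def spill(self):
        fd, path = tempfile.mkstemp()
        with os.fdopen(fd, "w") as f:
            for x in self._buffer:
                f.write(f"{x}\n")
        self._buffer = None   # free the in-memory sorter now
        self._path = path

    def __iter__(self):
        if self._buffer is not None:
            yield from self._buffer
        else:
            with open(self._path) as f:
                for line in f:
                    yield int(line)
            os.remove(self._path)

it = SpillableIterator([3, 1, 2])
it.spill()            # memory is released before consumption
print(list(it))       # [1, 2, 3]
```

Freeing the buffer at spill time matters under memory pressure: the whole point of spilling is to give memory back, and keeping the sorter alive until the iterator is exhausted defeats that.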
[
https://issues.apache.org/jira/browse/SPARK-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu updated SPARK-11737:
---
Fix Version/s: (was: 1.5.2)
1.5.3
> String may not be serialized correctly wit
[
https://issues.apache.org/jira/browse/SPARK-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-11737.
Resolution: Fixed
Fix Version/s: 1.5.2
1.6.0
Issue resolved by pull reque
[
https://issues.apache.org/jira/browse/SPARK-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-11583.
Resolution: Fixed
Fix Version/s: 1.6.0
Issue resolved by pull request 9746
[https://github.c
[
https://issues.apache.org/jira/browse/SPARK-11016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-11016.
Resolution: Fixed
Issue resolved by pull request 9748
[https://github.com/apache/spark/pull/9748]
[
https://issues.apache.org/jira/browse/SPARK-11767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu resolved SPARK-11767.
Resolution: Fixed
Fix Version/s: 1.6.0
> Easy to OOM when caching a large column
> -
Davies Liu created SPARK-11767:
--
Summary: Easy to OOM when caching a large column
Key: SPARK-11767
URL: https://issues.apache.org/jira/browse/SPARK-11767
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-11767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu reassigned SPARK-11767:
--
Assignee: Davies Liu
> Easy to OOM when caching a large column
> -
[
https://issues.apache.org/jira/browse/SPARK-11271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu closed SPARK-11271.
--
Resolution: Duplicate
Assignee: (was: Liang-Chi Hsieh)
Fix Version/s: (was: 1.6.
[
https://issues.apache.org/jira/browse/SPARK-11271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Davies Liu reopened SPARK-11271:
https://github.com/apache/spark/pull/9243 is reverted
> MapStatus too large for driver
> -