[
https://issues.apache.org/jira/browse/SPARK-22233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205498#comment-16205498
]
Apache Spark commented on SPARK-22233:
--
User 'jiangxb1987' has created a pull request
[
https://issues.apache.org/jira/browse/SPARK-20396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205499#comment-16205499
]
Apache Spark commented on SPARK-20396:
--
User 'ueshin' has created a pull request for
[
https://issues.apache.org/jira/browse/SPARK-22272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205535#comment-16205535
]
Sean Owen commented on SPARK-22272:
---
OK fair enough, though I don't know if we'll have
[
https://issues.apache.org/jira/browse/SPARK-22272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen updated SPARK-22272:
--
Affects Version/s: (was: 2.1.1)
2.1.2
> killing task may cause the executor
Ben created SPARK-22284:
---
Summary: Code of class
\"org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection\"
grows beyond 64 KB
Key: SPARK-22284
URL: https://issues.apache.org/jira/browse/SPARK-22
[
https://issues.apache.org/jira/browse/SPARK-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ben updated SPARK-22284:
Description:
I am using pySpark 2.1.0 in a production environment, and trying to join two
DataFrames, one of which
[
https://issues.apache.org/jira/browse/SPARK-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ben updated SPARK-22284:
Description:
I am using pySpark 2.1.0 in a production environment, and trying to join two
DataFrames, one of which
[
https://issues.apache.org/jira/browse/SPARK-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ben updated SPARK-22284:
Component/s: SQL
> Code of class
> \"org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjec
Zhenhua Wang created SPARK-22285:
Summary: Change implementation of ApproxCountDistinctForIntervals
to TypedImperativeAggregate
Key: SPARK-22285
URL: https://issues.apache.org/jira/browse/SPARK-22285
[
https://issues.apache.org/jira/browse/SPARK-22285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Zhenhua Wang updated SPARK-22285:
-
Description:
The current implementation of `ApproxCountDistinctForIntervals` is
`ImperativeAggre
[
https://issues.apache.org/jira/browse/SPARK-22285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-22285:
Assignee: Apache Spark
> Change implementation of ApproxCountDistinctForIntervals to
> Ty
[
https://issues.apache.org/jira/browse/SPARK-22285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-22285:
Assignee: (was: Apache Spark)
> Change implementation of ApproxCountDistinctForInterva
[
https://issues.apache.org/jira/browse/SPARK-22285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205599#comment-16205599
]
Apache Spark commented on SPARK-22285:
--
User 'wzhfy' has created a pull request for
[
https://issues.apache.org/jira/browse/SPARK-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205600#comment-16205600
]
Sean Owen commented on SPARK-22284:
---
Without more details, this could be a duplicate of
[
https://issues.apache.org/jira/browse/SPARK-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205601#comment-16205601
]
Ben commented on SPARK-22284:
-
If I upgrade to 2.2.0 then I, for myself at least, would not n
[
https://issues.apache.org/jira/browse/SPARK-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205601#comment-16205601
]
Ben edited comment on SPARK-22284 at 10/16/17 9:10 AM:
---
If I upgrad
[
https://issues.apache.org/jira/browse/SPARK-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205604#comment-16205604
]
Sean Owen commented on SPARK-22284:
---
What you've described is a symptom with many cause
[
https://issues.apache.org/jira/browse/SPARK-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ben updated SPARK-22284:
Description:
I am using pySpark 2.1.0 in a production environment, and trying to join two
DataFrames, one of which
[
https://issues.apache.org/jira/browse/SPARK-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205608#comment-16205608
]
Ben commented on SPARK-22284:
-
I did try adding
{code:java}
--conf "spark.sql.codegen.wholeS
Lijie Xu created SPARK-22286:
Summary: OutOfMemoryError caused by memory leak and large
serializer batch size in ExternalAppendOnlyMap
Key: SPARK-22286
URL: https://issues.apache.org/jira/browse/SPARK-22286
[
https://issues.apache.org/jira/browse/SPARK-22286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Lijie Xu updated SPARK-22286:
-
Description:
*[Abstract]*
I recently encountered an OOM error in a simple _groupByKey_ application. Afte
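A hypothetical sketch of the kind of job described, under assumed sizes and key skew; `groupByKey` aggregates on the reduce side through `ExternalAppendOnlyMap`, which is the code path this report targets:
{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("groupByKey-oom-sketch").getOrCreate()
val sc = spark.sparkContext

// Skewed pair data: a handful of keys own millions of records (sizes are made up).
val records = sc.parallelize(1 to 5000000).map(i => (i % 10, "x" * 100))

// groupByKey buffers each key's values in ExternalAppendOnlyMap and spills to disk
// when it runs out of execution memory.
val grouped = records.groupByKey()
grouped.mapValues(_.size).collect().foreach(println)
{code}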
[
https://issues.apache.org/jira/browse/SPARK-22286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Lijie Xu updated SPARK-22286:
-
Description:
*[Abstract]*
I recently encountered an OOM error in a simple _groupByKey_ application. Aft
[
https://issues.apache.org/jira/browse/SPARK-22286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Lijie Xu updated SPARK-22286:
-
Description:
*[Abstract]*
I recently encountered an OOM error in a simple _groupByKey_ application. Aft
[
https://issues.apache.org/jira/browse/SPARK-22286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Lijie Xu updated SPARK-22286:
-
Description:
*[Abstract]*
I recently encountered an OOM error in a simple _groupByKey_ application. Aft
[
https://issues.apache.org/jira/browse/SPARK-22276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Fernando Pereira updated SPARK-22276:
-
Description:
When a dataframe is sorted it is partitioned with a RangePartitioner.
If lat
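A minimal sketch of the behavior described, with illustrative data; a global sort inserts a range-partitioning exchange, which shows up in the physical plan:
{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("sort-range-partitioning").getOrCreate()

val df = spark.range(1000000).toDF("id")
val sorted = df.sort("id")

// The plan contains "Exchange rangepartitioning(id ASC, ...)", i.e. the shuffle
// driven by a RangePartitioner that the ticket discusses.
sorted.explain()
{code}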
[
https://issues.apache.org/jira/browse/SPARK-22276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205655#comment-16205655
]
Fernando Pereira commented on SPARK-22276:
--
I added a simple example that shows
[
https://issues.apache.org/jira/browse/SPARK-22247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205737#comment-16205737
]
Patrick Duin commented on SPARK-22247:
--
Spark 2.3.0 hasn't been released so I am str
[
https://issues.apache.org/jira/browse/SPARK-22247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Patrick Duin resolved SPARK-22247.
--
Resolution: Duplicate
Fix Version/s: 2.3.0
> Hive partition filter very slow
> -
[
https://issues.apache.org/jira/browse/SPARK-22276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205794#comment-16205794
]
Fernando Pereira commented on SPARK-22276:
--
I made some more tests and this beha
[
https://issues.apache.org/jira/browse/SPARK-22276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205794#comment-16205794
]
Fernando Pereira edited comment on SPARK-22276 at 10/16/17 12:07 PM:
--
[
https://issues.apache.org/jira/browse/SPARK-20783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205912#comment-16205912
]
Apache Spark commented on SPARK-20783:
--
User 'viirya' has created a pull request for
[
https://issues.apache.org/jira/browse/SPARK-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206111#comment-16206111
]
Timothy Hunter commented on SPARK-8515:
---
Before we commit to an implementation, we s
[
https://issues.apache.org/jira/browse/SPARK-22287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
paul mackles updated SPARK-22287:
-
Summary: SPARK_DAEMON_MEMORY not honored by MesosClusterDispatcher (was:
[MESOS] SPARK_DAEMON_ME
paul mackles created SPARK-22287:
Summary: [MESOS] SPARK_DAEMON_MEMORY not honored by
MesosClusterDispatcher
Key: SPARK-22287
URL: https://issues.apache.org/jira/browse/SPARK-22287
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-22231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206143#comment-16206143
]
Jeremy Smith commented on SPARK-22231:
--
[~viirya] - It is confusing that the lambda
[
https://issues.apache.org/jira/browse/SPARK-22282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Li resolved SPARK-22282.
-
Resolution: Fixed
Assignee: Dongjoon Hyun
Fix Version/s: 2.3.0
> Rename OrcRelation to Or
Ryan Williams created SPARK-22288:
-
Summary: Tricky interaction between closure-serialization and
inheritance results in confusing failure
Key: SPARK-22288
URL: https://issues.apache.org/jira/browse/SPARK-22288
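As a hedged illustration of why such failures are confusing, here is one classic pitfall in this area (not necessarily the exact scenario in this ticket): a lambda defined in a superclass references an instance member, so Spark ends up trying to serialize the whole subclass instance:
{code:scala}
import org.apache.spark.sql.SparkSession

abstract class Base {
  def offset: Int
  // The lambda references an instance member, so it captures `this`.
  def plusOne: Int => Int = x => x + offset
}

class Job extends Base {                  // not Serializable
  def offset: Int = 1
  def run(): Unit = {
    val sc = SparkSession.builder.appName("closure-inheritance-sketch")
      .getOrCreate().sparkContext
    // Fails at task serialization with an exception naming Job/Base rather than the lambda.
    sc.parallelize(1 to 10).map(plusOne).collect()
  }
}
{code}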
Nic Eggert created SPARK-22289:
--
Summary: Cannot save LogisticRegressionClassificationModel with
bounds on coefficients
Key: SPARK-22289
URL: https://issues.apache.org/jira/browse/SPARK-22289
Project: Sp
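A small hedged sketch of the reported scenario (data, path and bound values are made up); in Spark 2.2 the box-constraint setters exist on `LogisticRegression`, and the failure is reported at the save step:
{code:scala}
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.linalg.{Matrices, Vectors}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("bounded-lr-save").getOrCreate()
import spark.implicits._

// Tiny illustrative training set: (label, features).
val training = Seq(
  (0.0, Vectors.dense(0.0, 1.1, 0.1)),
  (1.0, Vectors.dense(2.0, 1.0, -1.0)),
  (1.0, Vectors.dense(2.0, 1.3, 1.0))
).toDF("label", "features")

val lr = new LogisticRegression()
  .setLowerBoundsOnCoefficients(Matrices.dense(1, 3, Array(0.0, 0.0, 0.0)))
  .setUpperBoundsOnIntercepts(Vectors.dense(1.0))

val model = lr.fit(training)
model.write.overwrite().save("/tmp/bounded-lr-model")   // the step reported to fail
{code}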
[
https://issues.apache.org/jira/browse/SPARK-22276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206745#comment-16206745
]
Liang-Chi Hsieh commented on SPARK-22276:
-
I think this issue is already resolved
[
https://issues.apache.org/jira/browse/SPARK-22280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Li resolved SPARK-22280.
-
Resolution: Fixed
Assignee: Dongjoon Hyun
Fix Version/s: 2.3.0
> Improve StatisticsSuite
[
https://issues.apache.org/jira/browse/SPARK-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206770#comment-16206770
]
Liang-Chi Hsieh commented on SPARK-8515:
I'm not sure if SPARK-2008 is related to
[
https://issues.apache.org/jira/browse/SPARK-8515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206770#comment-16206770
]
Liang-Chi Hsieh edited comment on SPARK-8515 at 10/16/17 11:28 PM:
-
Marcelo Vanzin created SPARK-22290:
--
Summary: Starting second context in same JVM fails to get new Hive
delegation token
Key: SPARK-22290
URL: https://issues.apache.org/jira/browse/SPARK-22290
Projec
[
https://issues.apache.org/jira/browse/SPARK-18627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Marcelo Vanzin resolved SPARK-18627.
Resolution: Cannot Reproduce
This seems to be working with 2.3.
> Cannot connect to Hive m
[
https://issues.apache.org/jira/browse/SPARK-22290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206814#comment-16206814
]
Apache Spark commented on SPARK-22290:
--
User 'vanzin' has created a pull request for
[
https://issues.apache.org/jira/browse/SPARK-22290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-22290:
Assignee: Apache Spark
> Starting second context in same JVM fails to get new Hive delegat
[
https://issues.apache.org/jira/browse/SPARK-22290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-22290:
Assignee: (was: Apache Spark)
> Starting second context in same JVM fails to get new H
[
https://issues.apache.org/jira/browse/SPARK-22286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Lijie Xu updated SPARK-22286:
-
Description:
*[Abstract]*
I recently encountered an OOM error in a simple _groupByKey_ application. Aft
Fabio J. Walter created SPARK-22291:
---
Summary: Postgresql UUID[] to Cassandra: Conversion Error
Key: SPARK-22291
URL: https://issues.apache.org/jira/browse/SPARK-22291
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-22291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Fabio J. Walter updated SPARK-22291:
Attachment: org_apache_spark_sql_execution_datasources_jdbc_JdbcUtil.png
> Postgresql UUID[
[
https://issues.apache.org/jira/browse/SPARK-22283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206988#comment-16206988
]
Jen-Ming Chung commented on SPARK-22283:
Hi [~kitbellew],
I found the `withColumn
[
https://issues.apache.org/jira/browse/SPARK-22283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206988#comment-16206988
]
Jen-Ming Chung edited comment on SPARK-22283 at 10/17/17 4:45 AM:
-
[
https://issues.apache.org/jira/browse/SPARK-22284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206994#comment-16206994
]
Hyukjin Kwon commented on SPARK-22284:
--
[~someonehere15], would you maybe able to pr
[
https://issues.apache.org/jira/browse/SPARK-22276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hyukjin Kwon resolved SPARK-22276.
--
Resolution: Duplicate
> Unnecessary repartitioning
> --
>
>
windkithk created SPARK-22292:
-
Summary: Add spark.mem.max to limit the amount of memory received
from Mesos
Key: SPARK-22292
URL: https://issues.apache.org/jira/browse/SPARK-22292
Project: Spark
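A sketch of how the proposed setting would be supplied, assuming the change were merged; `spark.mem.max` is the proposal from this ticket, not an existing released option, and the values are illustrative:
{code:scala}
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("mesos-total-memory-cap")
  .set("spark.mem.max", "16g")            // proposed cap on total memory acquired from Mesos offers
  .set("spark.executor.memory", "4g")     // existing per-executor setting, for contrast
  .set("spark.cores.max", "32")           // the analogous existing cap for cores
{code}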
[
https://issues.apache.org/jira/browse/SPARK-22292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
windkithk updated SPARK-22292:
--
Issue Type: Improvement (was: Bug)
> Add spark.mem.max to limit the amount of memory received from Mes
[
https://issues.apache.org/jira/browse/SPARK-22267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Dongjoon Hyun updated SPARK-22267:
--
Description:
For a long time, Apache Spark SQL returns incorrect results when ORC file
schema
[
https://issues.apache.org/jira/browse/SPARK-22267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Dongjoon Hyun updated SPARK-22267:
--
Description:
For a long time, Apache Spark SQL returns incorrect results when ORC file
schema
[
https://issues.apache.org/jira/browse/SPARK-22292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207060#comment-16207060
]
Apache Spark commented on SPARK-22292:
--
User 'windkit' has created a pull request fo
[
https://issues.apache.org/jira/browse/SPARK-22292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-22292:
Assignee: (was: Apache Spark)
> Add spark.mem.max to limit the amount of memory receiv
[
https://issues.apache.org/jira/browse/SPARK-22292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-22292:
Assignee: Apache Spark
> Add spark.mem.max to limit the amount of memory received from Mes
[
https://issues.apache.org/jira/browse/SPARK-22289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207063#comment-16207063
]
yuhao yang commented on SPARK-22289:
Thanks for reporting the issue. Should be a stra
[
https://issues.apache.org/jira/browse/SPARK-22289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207063#comment-16207063
]
yuhao yang edited comment on SPARK-22289 at 10/17/17 6:28 AM:
-
[
https://issues.apache.org/jira/browse/SPARK-22289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207063#comment-16207063
]
yuhao yang edited comment on SPARK-22289 at 10/17/17 6:43 AM:
-
Xianyang Liu created SPARK-22293:
Summary: Avoid unnecessary traversal in ResolveReferences
Key: SPARK-22293
URL: https://issues.apache.org/jira/browse/SPARK-22293
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-22293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-22293:
Assignee: Apache Spark
> Avoid unnecessary traversal in ResolveReferences
> --
[
https://issues.apache.org/jira/browse/SPARK-22293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207093#comment-16207093
]
Apache Spark commented on SPARK-22293:
--
User 'ConeyLiu' has created a pull request f
[
https://issues.apache.org/jira/browse/SPARK-22293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Apache Spark reassigned SPARK-22293:
Assignee: (was: Apache Spark)
> Avoid unnecessary traversal in ResolveReferences
> ---