[ https://issues.apache.org/jira/browse/SPARK-5439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294872#comment-14294872 ]
Chengxiang Li commented on SPARK-5439:
--
I think the gap here is that, when launch a
Chengxiang Li created SPARK-5377:
Summary: Dynamically add jar into Spark Driver's classpath.
Key: SPARK-5377
URL: https://issues.apache.org/jira/browse/SPARK-5377
Project: Spark
Issue Type:
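SPARK-5377 asks for jars added at runtime to become visible to the Spark driver itself (executors already receive jars distributed via `SparkContext.addJar`, but the driver's own classloader historically did not see them). The JVM mechanism such a feature builds on is loading a jar through a `URLClassLoader` created at runtime. A minimal standalone sketch of that mechanism, not Spark code, with illustrative names:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;

public class DynamicJar {
    // Build a throwaway jar holding one resource, then read that resource back
    // through a URLClassLoader created at runtime -- the JVM mechanism a
    // "dynamically add jar to the classpath" feature has to rely on.
    static String loadFromFreshJar() throws Exception {
        Path jar = Files.createTempFile("demo", ".jar");
        try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(jar))) {
            out.putNextEntry(new JarEntry("greeting.txt"));
            out.write("hello from the jar".getBytes("UTF-8"));
            out.closeEntry();
        }
        // The jar is not on the current classpath; a fresh loader makes it visible.
        try (URLClassLoader loader = new URLClassLoader(new URL[] { jar.toUri().toURL() });
             BufferedReader r = new BufferedReader(
                 new InputStreamReader(loader.getResourceAsStream("greeting.txt"), "UTF-8"))) {
            return r.readLine();
        } finally {
            Files.deleteIfExists(jar);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadFromFreshJar()); // prints "hello from the jar"
    }
}
```

A real driver-side implementation additionally has to pick a classloader that user code and Spark's own deserialization both consult, which is the harder part of the issue.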
[ https://issues.apache.org/jira/browse/SPARK-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14288783#comment-14288783 ]
Chengxiang Li commented on SPARK-5377:
--
cc [~xuefuz], [~rxin], [~Grace Huang].
Chengxiang Li created SPARK-4955:
Summary: Executor does not get killed after configured interval.
Key: SPARK-4955
URL: https://issues.apache.org/jira/browse/SPARK-4955
Project: Spark
Issue
[ https://issues.apache.org/jira/browse/SPARK-4955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14258138#comment-14258138 ]
Chengxiang Li commented on SPARK-4955:
--
I verified this feature with Hive on Spark,
[ https://issues.apache.org/jira/browse/SPARK-4955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14258139#comment-14258139 ]
Chengxiang Li commented on SPARK-4955:
--
cc:[~andrewor14]
Executor does not get
Chengxiang Li created SPARK-4585:
Summary: Spark dynamic scaling executors use upper limit value as default.
Key: SPARK-4585
URL: https://issues.apache.org/jira/browse/SPARK-4585
Project: Spark
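SPARK-4955 and SPARK-4585 both concern Spark's dynamic executor allocation: the former reports idle executors not being released after the configured interval, the latter that scaling starts from the upper limit by default. For context, these are the relevant configuration keys; values are illustrative, and exact key availability depends on the Spark version:

```properties
spark.dynamicAllocation.enabled              true
spark.dynamicAllocation.minExecutors         2
spark.dynamicAllocation.maxExecutors         20
# Lets the starting executor count differ from maxExecutors (the SPARK-4585 concern):
spark.dynamicAllocation.initialExecutors     2
# Idle executors should be released after this interval (the SPARK-4955 concern):
spark.dynamicAllocation.executorIdleTimeout  60s
```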
[ https://issues.apache.org/jira/browse/SPARK-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143191#comment-14143191 ]
Chengxiang Li commented on SPARK-2321:
--
I agree that a stable, immutable, and
[ https://issues.apache.org/jira/browse/SPARK-3543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135016#comment-14135016 ]
Chengxiang Li commented on SPARK-3543:
--
I think this would solve SPARK-2895 as well,
[ https://issues.apache.org/jira/browse/SPARK-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121085#comment-14121085 ]
Chengxiang Li commented on SPARK-2321:
--
I collected some Hive-side requirements here,
[ https://issues.apache.org/jira/browse/SPARK-2321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14121173#comment-14121173 ]
Chengxiang Li commented on SPARK-2321:
--
I'm not sure whether I understand you right,
[ https://issues.apache.org/jira/browse/SPARK-2633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chengxiang Li updated SPARK-2633:
-
Attachment: Spark listener enhancement for Hive on Spark job monitor and statistic.docx
enhance
[ https://issues.apache.org/jira/browse/SPARK-2633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chengxiang Li updated SPARK-2633:
-
Attachment: (was: Spark listener enhancement for Hive on Spark job monitor and
[ https://issues.apache.org/jira/browse/SPARK-2895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chengxiang Li updated SPARK-2895:
-
Description:
This is a requirement from Hive on Spark, mapPartitionsWithContext only exists in
[ https://issues.apache.org/jira/browse/SPARK-2633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chengxiang Li updated SPARK-2633:
-
Attachment: (was: Spark listener enhancement for Hive on Spark job monitor and
[ https://issues.apache.org/jira/browse/SPARK-2633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chengxiang Li updated SPARK-2633:
-
Attachment: Spark listener enhancement for Hive on Spark job monitor and statistic.docx
[ https://issues.apache.org/jira/browse/SPARK-2895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14116221#comment-14116221 ]
Chengxiang Li commented on SPARK-2895:
--
pull
[ https://issues.apache.org/jira/browse/SPARK-2633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14113611#comment-14113611 ]
Chengxiang Li commented on SPARK-2633:
--
It's quite subjective I think, like Hive on
Chengxiang Li created SPARK-3199:
Summary: native Java spark listener API support
Key: SPARK-3199
URL: https://issues.apache.org/jira/browse/SPARK-3199
Project: Spark
Issue Type: New Feature
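SPARK-3199 and SPARK-2633 both ask for a Java-friendly way to register listeners on Spark's listener bus (the Scala `SparkListener` trait was awkward to implement from Java at the time). The pattern being exposed is an ordinary listener bus. A minimal standalone sketch of that pattern; the type and method names below are illustrative, not Spark's actual API:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative stand-ins for the listener-bus pattern; names are NOT Spark's API.
interface JobListener {
    void onJobStart(int jobId);
    void onJobEnd(int jobId, boolean succeeded);
}

class ListenerBus {
    private final List<JobListener> listeners = new CopyOnWriteArrayList<>();

    void add(JobListener l) { listeners.add(l); }

    // Fan each event out to every registered listener, in registration order.
    void postJobStart(int jobId) { for (JobListener l : listeners) l.onJobStart(jobId); }
    void postJobEnd(int jobId, boolean ok) { for (JobListener l : listeners) l.onJobEnd(jobId, ok); }
}

public class ListenerDemo {
    // Register a listener the way a Java caller would (an anonymous class),
    // post one event, and return the job id the listener observed.
    static int run() {
        ListenerBus bus = new ListenerBus();
        final int[] seen = new int[1];
        bus.add(new JobListener() {
            public void onJobStart(int jobId) { seen[0] = jobId; }
            public void onJobEnd(int jobId, boolean ok) { }
        });
        bus.postJobStart(42);
        return seen[0];
    }

    public static void main(String[] args) {
        System.out.println("listener saw job " + run()); // prints "listener saw job 42"
    }
}
```

A listener like this is also the natural answer to SPARK-2636's question of where to obtain a job identifier: a job-start event can carry the id to the caller.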
[ https://issues.apache.org/jira/browse/SPARK-2633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chengxiang Li updated SPARK-2633:
-
Summary: enhance spark listener API to gather more spark job information (was: support register
[ https://issues.apache.org/jira/browse/SPARK-2633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chengxiang Li updated SPARK-2633:
-
Description: Based on Hive on Spark job status monitoring and statistic collection requirement,
[ https://issues.apache.org/jira/browse/SPARK-2633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14108887#comment-14108887 ]
Chengxiang Li commented on SPARK-2633:
--
I would start to work on this issue, for
[ https://issues.apache.org/jira/browse/SPARK-2895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088895#comment-14088895 ]
Chengxiang Li commented on SPARK-2895:
--
{quote}
Can we add the label hive to all the
[ https://issues.apache.org/jira/browse/SPARK-2633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chengxiang Li updated SPARK-2633:
-
Labels: hive (was: )
support register spark listener to listener bus with Java API
[ https://issues.apache.org/jira/browse/SPARK-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chengxiang Li updated SPARK-2636:
-
Component/s: Java API
no where to get job identifier while submit spark job through spark API
[ https://issues.apache.org/jira/browse/SPARK-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chengxiang Li updated SPARK-2636:
-
Labels: hive (was: )
no where to get job identifier while submit spark job through spark API
Chengxiang Li created SPARK-2895:
Summary: Support mapPartitionsWithContext in Spark Java API
Key: SPARK-2895
URL: https://issues.apache.org/jira/browse/SPARK-2895
Project: Spark
Issue Type:
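SPARK-2895 asks to expose `mapPartitionsWithContext` (then Scala-only) through the Java API: the operation maps each partition's iterator together with the task context. A pure-Java sketch of the semantics over in-memory partitions; the `Ctx` type and all names are illustrative stand-ins, not Spark's:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MapPartitionsDemo {
    // Illustrative stand-in for Spark's TaskContext: just the partition id here.
    static class Ctx {
        final int partitionId;
        Ctx(int id) { partitionId = id; }
    }

    interface FnWithCtx<T, R> { List<R> apply(Ctx ctx, List<T> partition); }

    // Apply f once per partition, handing it that partition's context --
    // the shape of mapPartitionsWithContext, minus laziness and distribution.
    static <T, R> List<List<R>> mapPartitionsWithContext(List<List<T>> parts, FnWithCtx<T, R> f) {
        List<List<R>> out = new ArrayList<>();
        for (int i = 0; i < parts.size(); i++) out.add(f.apply(new Ctx(i), parts.get(i)));
        return out;
    }

    static String demo() {
        // Two partitions; tag each element with the partition it came from.
        List<List<Integer>> parts = Arrays.asList(Arrays.asList(1, 2), Arrays.asList(3));
        return mapPartitionsWithContext(parts, (ctx, part) -> {
            List<String> res = new ArrayList<>();
            for (Integer x : part) res.add("p" + ctx.partitionId + ":" + x);
            return res;
        }).toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "[[p0:1, p0:2], [p1:3]]"
    }
}
```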
[ https://issues.apache.org/jira/browse/SPARK-2895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088736#comment-14088736 ]
Chengxiang Li commented on SPARK-2895:
--
cc [~rxin] [~brocknoland] [~szehon]
Support
[ https://issues.apache.org/jira/browse/SPARK-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087102#comment-14087102 ]
Chengxiang Li commented on SPARK-2636:
--
{quote}
There are two ways I think. One is
[ https://issues.apache.org/jira/browse/SPARK-2633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chengxiang Li updated SPARK-2633:
-
Attachment: Spark listener enhancement for Hive on Spark job monitor and statistic.docx
I add a
[ https://issues.apache.org/jira/browse/SPARK-2633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14080425#comment-14080425 ]
Chengxiang Li commented on SPARK-2633:
--
Thanks, [~vanzin]
{quote}
Registering
[ https://issues.apache.org/jira/browse/SPARK-2633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071402#comment-14071402 ]
Chengxiang Li commented on SPARK-2633:
--
For Hive job status monitor, spark listener
[ https://issues.apache.org/jira/browse/SPARK-2633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071478#comment-14071478 ]
Chengxiang Li commented on SPARK-2633:
--
add 2 more:
# StageInfo class is not well
Chengxiang Li created SPARK-2633:
Summary: support register spark listener to listener bus with Java API
Key: SPARK-2633
URL: https://issues.apache.org/jira/browse/SPARK-2633
Project: Spark
[ https://issues.apache.org/jira/browse/SPARK-2633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071310#comment-14071310 ]
Chengxiang Li commented on SPARK-2633:
--
cc [~rxin] [~xuefuz]
support register spark
Chengxiang Li created SPARK-2636:
Summary: no where to get job identifier while submit spark job through spark API
Key: SPARK-2636
URL: https://issues.apache.org/jira/browse/SPARK-2636
Project: Spark
[ https://issues.apache.org/jira/browse/SPARK-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Chengxiang Li updated SPARK-2636:
-
Description:
In Hive on Spark, we want to track spark job status through Spark API, the basic
[ https://issues.apache.org/jira/browse/SPARK-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071311#comment-14071311 ]
Chengxiang Li commented on SPARK-2636:
--
cc [~rxin] [~xuefuz]
no where to get job