[
https://issues.apache.org/jira/browse/SPARK-30953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-30953:
-
Summary: InsertAdaptiveSparkPlan should apply AQE on child plan of write
commands (was:
wuyi created SPARK-31409:
Summary: Fix failed tests due to result order changing when we
enable AQE
Key: SPARK-31409
URL: https://issues.apache.org/jira/browse/SPARK-31409
Project: Spark
Issue
wuyi created SPARK-31407:
Summary: Fix hive/SQLQuerySuite.derived from Hive query file:
drop_database_removes_partition_dirs.q
Key: SPARK-31407
URL: https://issues.apache.org/jira/browse/SPARK-31407
Project:
wuyi created SPARK-31391:
Summary: Add AdaptiveTestUtils to ease the test of AQE
Key: SPARK-31391
URL: https://issues.apache.org/jira/browse/SPARK-31391
Project: Spark
Issue Type: Test
[
https://issues.apache.org/jira/browse/SPARK-31384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-31384:
-
Summary: NPE in OptimizeSkewedJoin when there's a inputRDD of plan has 0
partition (was: Fix NPE in
wuyi created SPARK-31384:
Summary: Fix NPE in OptimizeSkewedJoin
Key: SPARK-31384
URL: https://issues.apache.org/jira/browse/SPARK-31384
Project: Spark
Issue Type: Bug
Components: SQL
wuyi created SPARK-31379:
Summary: Fix flaky test:
o.a.s.scheduler.CoarseGrainedSchedulerBackendSuite.extra resources from executor
Key: SPARK-31379
URL: https://issues.apache.org/jira/browse/SPARK-31379
wuyi created SPARK-31344:
Summary: Polish implementation of barrier() and allGather()
Key: SPARK-31344
URL: https://issues.apache.org/jira/browse/SPARK-31344
Project: Spark
Issue Type: Improvement
wuyi created SPARK-31259:
Summary: Fix log error of curRequestSize in
ShuffleBlockFetcherIterator
Key: SPARK-31259
URL: https://issues.apache.org/jira/browse/SPARK-31259
Project: Spark
Issue Type:
wuyi created SPARK-31242:
Summary: Clone SparkSession should respect
spark.sql.legacy.sessionInitWithConfigDefaults
Key: SPARK-31242
URL: https://issues.apache.org/jira/browse/SPARK-31242
Project: Spark
wuyi created SPARK-31206:
Summary: AQE will use the same SubqueryExec even if
subqueryReuseEnabled=false
Key: SPARK-31206
URL: https://issues.apache.org/jira/browse/SPARK-31206
Project: Spark
Issue
wuyi created SPARK-31190:
Summary: ScalaReflection should erasure non user defined AnyVal
type
Key: SPARK-31190
URL: https://issues.apache.org/jira/browse/SPARK-31190
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-31175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-31175:
-
Summary: Avoid creating reverse comparator for each compare in
InterpretedOrdering (was: Avoid creating new
wuyi created SPARK-31175:
Summary: Avoid creating new reverse comparator per compare in
InterpretedOrdering
Key: SPARK-31175
URL: https://issues.apache.org/jira/browse/SPARK-31175
Project: Spark
wuyi created SPARK-31163:
Summary: acl/permission should handle non-existed path when
truncating table
Key: SPARK-31163
URL: https://issues.apache.org/jira/browse/SPARK-31163
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-31081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-31081:
-
Summary: Make the display of stageId/stageAttemptId/taskId of sql metrics
configurable in UI (was: Make
wuyi created SPARK-31082:
Summary: MapOutputTrackerMaster.getMapLocation can't handle last
mapIndex
Key: SPARK-31082
URL: https://issues.apache.org/jira/browse/SPARK-31082
Project: Spark
Issue
wuyi created SPARK-31081:
Summary: Make SQLMetrics more readable from UI
Key: SPARK-31081
URL: https://issues.apache.org/jira/browse/SPARK-31081
Project: Spark
Issue Type: Improvement
wuyi created SPARK-31054:
Summary: Turn on deprecation in Scala REPL/spark-shell by default
Key: SPARK-31054
URL: https://issues.apache.org/jira/browse/SPARK-31054
Project: Spark
Issue Type:
wuyi created SPARK-31052:
Summary: Fix flaky test of SPARK-30388
Key: SPARK-31052
URL: https://issues.apache.org/jira/browse/SPARK-31052
Project: Spark
Issue Type: Bug
Components: Spark
[
https://issues.apache.org/jira/browse/SPARK-31050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-31050:
-
Description: Disable flaky KafkaDelegationTokenSuite since it's too flaky.
> Disable flaky
[
https://issues.apache.org/jira/browse/SPARK-31050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-31050:
-
Environment: (was: Disable flaky KafkaDelegationTokenSuite since it's
too flaky.)
> Disable flaky
wuyi created SPARK-31050:
Summary: Disable flaky KafkaDelegationTokenSuite
Key: SPARK-31050
URL: https://issues.apache.org/jira/browse/SPARK-31050
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-30541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-30541:
-
Priority: Blocker (was: Major)
> Flaky test: org.apache.spark.sql.kafka010.KafkaDelegationTokenSuite
>
wuyi created SPARK-31034:
Summary: ShuffleBlockFetcherIterator may can't create request for
last group
Key: SPARK-31034
URL: https://issues.apache.org/jira/browse/SPARK-31034
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-31018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-31018:
-
Summary: Deprecate support of multiple workers on the same host in
Standalone (was: Deprecate support of
wuyi created SPARK-31018:
Summary: Deprecate support of multiple workers on the same host in
Standalone.
Key: SPARK-31018
URL: https://issues.apache.org/jira/browse/SPARK-31018
Project: Spark
Issue
wuyi created SPARK-31017:
Summary: Test for shuffle requests packaging with different size
and numBlocks limit
Key: SPARK-31017
URL: https://issues.apache.org/jira/browse/SPARK-31017
Project: Spark
wuyi created SPARK-30999:
Summary: Don't cancel a QueryStageExec when it's already finished
Key: SPARK-30999
URL: https://issues.apache.org/jira/browse/SPARK-30999
Project: Spark
Issue Type:
wuyi created SPARK-30972:
Summary: PruneHiveTablePartitions should be executed as
earlyScanPushDownRules
Key: SPARK-30972
URL: https://issues.apache.org/jira/browse/SPARK-30972
Project: Spark
Issue
wuyi created SPARK-30969:
Summary: Remove resource coordination support from Standalone
Key: SPARK-30969
URL: https://issues.apache.org/jira/browse/SPARK-30969
Project: Spark
Issue Type: Sub-task
[
https://issues.apache.org/jira/browse/SPARK-30947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-30947:
-
Summary: Log better message when accelerate resource is empty (was: Don't
log accelerate resources when it's
wuyi created SPARK-30953:
Summary: InsertAdaptiveSparkPlan should also skip v2 command
Key: SPARK-30953
URL: https://issues.apache.org/jira/browse/SPARK-30953
Project: Spark
Issue Type: Improvement
wuyi created SPARK-30947:
Summary: Don't log accelerate resources when it's empty
Key: SPARK-30947
URL: https://issues.apache.org/jira/browse/SPARK-30947
Project: Spark
Issue Type: Improvement
wuyi created SPARK-30937:
Summary: Migration guide for Hive 2.3
Key: SPARK-30937
URL: https://issues.apache.org/jira/browse/SPARK-30937
Project: Spark
Issue Type: Sub-task
Components: SQL
wuyi created SPARK-30903:
Summary: Fail fast on duplicate columns when analyze columns
Key: SPARK-30903
URL: https://issues.apache.org/jira/browse/SPARK-30903
Project: Spark
Issue Type: Improvement
wuyi created SPARK-30863:
Summary: Distinguish Cast and AnsiCast in toString()
Key: SPARK-30863
URL: https://issues.apache.org/jira/browse/SPARK-30863
Project: Spark
Issue Type: Improvement
wuyi created SPARK-30846:
Summary: Add AccumulatorV2 API in JavaSparkContext
Key: SPARK-30846
URL: https://issues.apache.org/jira/browse/SPARK-30846
Project: Spark
Issue Type: Improvement
wuyi created SPARK-30844:
Summary: Static partition should also follow StoreAssignmentPolicy
when insert into table
Key: SPARK-30844
URL: https://issues.apache.org/jira/browse/SPARK-30844
Project: Spark
wuyi created SPARK-30812:
Summary: Revise boolean config name according to new config naming
policy
Key: SPARK-30812
URL: https://issues.apache.org/jira/browse/SPARK-30812
Project: Spark
Issue
wuyi created SPARK-30744:
Summary: Optimize AnalyzePartitionCommand by calculating location
sizes in parallel
Key: SPARK-30744
URL: https://issues.apache.org/jira/browse/SPARK-30744
Project: Spark
wuyi created SPARK-30729:
Summary: Eagerly filter out zombie TaskSetManager before offering
resources
Key: SPARK-30729
URL: https://issues.apache.org/jira/browse/SPARK-30729
Project: Spark
Issue
wuyi created SPARK-30594:
Summary: Do not post SparkListenerBlockUpdated when
updateBlockInfo returns false
Key: SPARK-30594
URL: https://issues.apache.org/jira/browse/SPARK-30594
Project: Spark
wuyi created SPARK-30578:
Summary: Explicitly set conf to use datasource v2 for
v2.3/OrcFilterSuite
Key: SPARK-30578
URL: https://issues.apache.org/jira/browse/SPARK-30578
Project: Spark
Issue
wuyi created SPARK-30518:
Summary: Precision and scale should be same for values between
-1.0 and 1.0 in Decimal
Key: SPARK-30518
URL: https://issues.apache.org/jira/browse/SPARK-30518
Project: Spark
wuyi created SPARK-30508:
Summary: Add DataFrameReader.executeCommand API for external
datasource
Key: SPARK-30508
URL: https://issues.apache.org/jira/browse/SPARK-30508
Project: Spark
Issue Type:
wuyi created SPARK-30459:
Summary: Fix ignoreMissingFiles/ignoreCorruptFiles in DSv2
Key: SPARK-30459
URL: https://issues.apache.org/jira/browse/SPARK-30459
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-30402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi resolved SPARK-30402.
--
Resolution: Won't Do
> Support push down for filters with cast expression
>
[
https://issues.apache.org/jira/browse/SPARK-30440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17009427#comment-17009427
]
wuyi commented on SPARK-30440:
--
[~kabhwan] thanks for reporting it. Let me take a look.
> Flaky test:
[
https://issues.apache.org/jira/browse/SPARK-30433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-30433:
-
Summary: Make conflict attributes resolution more scalable in
ResolveReferences (was: Make conflict attributes
wuyi created SPARK-30433:
Summary: Make conflict attributes resolution more scalable
Key: SPARK-30433
URL: https://issues.apache.org/jira/browse/SPARK-30433
Project: Spark
Issue Type: Improvement
wuyi created SPARK-30402:
Summary: Support push down for filters with cast expression
Key: SPARK-30402
URL: https://issues.apache.org/jira/browse/SPARK-30402
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-30385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-30385:
-
Description:
While using ./bin/spark-shell, recently, I occasionally see IOException when I
try to quit:
wuyi created SPARK-30385:
Summary: WebUI occasionally throw IOException on stop()
Key: SPARK-30385
URL: https://issues.apache.org/jira/browse/SPARK-30385
Project: Spark
Issue Type: Bug
wuyi created SPARK-30373:
Summary: Avoid unnecessary sort for ParquetUtils.splitFiles
Key: SPARK-30373
URL: https://issues.apache.org/jira/browse/SPARK-30373
Project: Spark
Issue Type: Improvement
wuyi created SPARK-30359:
Summary: Do not clear executorsPendingToRemove in
CoarseGrainedSchedulerBackend.reset
Key: SPARK-30359
URL: https://issues.apache.org/jira/browse/SPARK-30359
Project: Spark
wuyi created SPARK-30355:
Summary: Unify isExecutorActive between
CoarseGrainedSchedulerBackend and DriverEndpoint
Key: SPARK-30355
URL: https://issues.apache.org/jira/browse/SPARK-30355
Project: Spark
wuyi created SPARK-30252:
Summary: Disallow negative scale of Decimal under ansi mode
Key: SPARK-30252
URL: https://issues.apache.org/jira/browse/SPARK-30252
Project: Spark
Issue Type: Improvement
wuyi created SPARK-30151:
Summary: Issue better error message when user-specified schema not
match relation schema
Key: SPARK-30151
URL: https://issues.apache.org/jira/browse/SPARK-30151
Project: Spark
wuyi created SPARK-30098:
Summary: Use default datasource as provider for CREATE TABLE syntax
Key: SPARK-30098
URL: https://issues.apache.org/jira/browse/SPARK-30098
Project: Spark
Issue Type:
wuyi created SPARK-29956:
Summary: A literal number with an exponent should be converted
into Double
Key: SPARK-29956
URL: https://issues.apache.org/jira/browse/SPARK-29956
Project: Spark
Issue
wuyi created SPARK-29871:
Summary: Flaky test: ImageFileFormatTest.test_read_images
Key: SPARK-29871
URL: https://issues.apache.org/jira/browse/SPARK-29871
Project: Spark
Issue Type: Test
[
https://issues.apache.org/jira/browse/SPARK-29838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16971570#comment-16971570
]
wuyi commented on SPARK-29838:
--
Hey guys, what's going on here? I see [~aman_omer] you have commented
[
https://issues.apache.org/jira/browse/SPARK-29837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-29837:
-
Parent: SPARK-29836
Issue Type: Sub-task (was: Task)
> PostgreSQL dialect: cast to boolean
>
wuyi created SPARK-29837:
Summary: PostgreSQL dialect: cast to boolean
Key: SPARK-29837
URL: https://issues.apache.org/jira/browse/SPARK-29837
Project: Spark
Issue Type: Task
Components:
wuyi created SPARK-29836:
Summary: PostgreSQL dialect: cast
Key: SPARK-29836
URL: https://issues.apache.org/jira/browse/SPARK-29836
Project: Spark
Issue Type: Improvement
Components: SQL
wuyi created SPARK-29537:
Summary: throw exception when user defined a wrong base path
Key: SPARK-29537
URL: https://issues.apache.org/jira/browse/SPARK-29537
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-28867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16950307#comment-16950307
]
wuyi commented on SPARK-28867:
--
[~irashid] Thanks for putting this as a related issue to SPARK-20656.
wuyi created SPARK-29261:
Summary: Support recover live entities from KVStore for
(SQL)AppStatusListener
Key: SPARK-29261
URL: https://issues.apache.org/jira/browse/SPARK-29261
Project: Spark
Issue
wuyi created SPARK-28867:
Summary: InMemoryStore checkpoint to speed up replay log file in
HistoryServer
Key: SPARK-28867
URL: https://issues.apache.org/jira/browse/SPARK-28867
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-27492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903081#comment-16903081
]
wuyi commented on SPARK-27492:
--
I'm wondering would it be possible or better if we could use accelerator
[
https://issues.apache.org/jira/browse/SPARK-28302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-28302:
-
Description:
When using SparkLauncher to submit applications concurrently with a thread pool
under *Windows*,
[
https://issues.apache.org/jira/browse/SPARK-28302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-28302:
-
Attachment: Main.scala
> SparkLauncher: The process cannot access the file because it is being used by
>
wuyi created SPARK-28302:
Summary: SparkLauncher: The process cannot access the file because
it is being used by another process
Key: SPARK-28302
URL: https://issues.apache.org/jira/browse/SPARK-28302
[
https://issues.apache.org/jira/browse/SPARK-27360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16860951#comment-16860951
]
wuyi commented on SPARK-27360:
--
oops...I happened to find that sub-task 2 has changed to what I want after
wuyi created SPARK-27999:
Summary: setup resources when Standalone Worker starts up
Key: SPARK-27999
URL: https://issues.apache.org/jira/browse/SPARK-27999
Project: Spark
Issue Type: Sub-task
[
https://issues.apache.org/jira/browse/SPARK-27666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-27666:
-
Description:
{code:java}
Exception in thread "Thread-14" java.lang.AssertionError: assertion failed:
Block
[
https://issues.apache.org/jira/browse/SPARK-27666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-27666:
-
Description:
We're facing an issue reported by SPARK-18406 and SPARK-25139. And
[
https://issues.apache.org/jira/browse/SPARK-27666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-27666:
-
Summary: Do not release lock while TaskContext already completed (was:
Stop python runner threads when task
[
https://issues.apache.org/jira/browse/SPARK-23191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16838237#comment-16838237
]
wuyi commented on SPARK-23191:
--
Hi [~neeraj20gupta] Can you explain more about the part of _running
[
https://issues.apache.org/jira/browse/SPARK-23191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834703#comment-16834703
]
wuyi commented on SPARK-23191:
--
Hi [~neeraj20gupta]
What do you mean by _multiple driver running in some
[
https://issues.apache.org/jira/browse/SPARK-23191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16829272#comment-16829272
]
wuyi commented on SPARK-23191:
--
[~cloud_fan] Ok, I'll have a deep look after 5.1 holiday.
> Workers
wuyi created SPARK-27568:
Summary: readLock leaked when method take() called on a cached rdd
Key: SPARK-27568
URL: https://issues.apache.org/jira/browse/SPARK-27568
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-27520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821916#comment-16821916
]
wuyi commented on SPARK-27520:
--
[~jiangxb1987] okay, let me try it. thanks.
> Introduce a global config
wuyi created SPARK-27510:
Summary: Master fall into dead loop while launching executor
failed in Worker
Key: SPARK-27510
URL: https://issues.apache.org/jira/browse/SPARK-27510
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-27193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-27193:
-
Description:
when enable `spark.sql.codegen.comments`, there will be multiple comment
lines. However,
wuyi created SPARK-27193:
Summary: CodeFormatter should format multi comment lines correctly
Key: SPARK-27193
URL: https://issues.apache.org/jira/browse/SPARK-27193
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-26927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778048#comment-16778048
]
wuyi commented on SPARK-26927:
--
I got it, thank you.
> Race condition may cause dynamic allocation not
[
https://issues.apache.org/jira/browse/SPARK-26927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16776309#comment-16776309
]
wuyi commented on SPARK-26927:
--
[~liupengcheng] I can not understand the issue clearly by your desc. Can
[
https://issues.apache.org/jira/browse/SPARK-26814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi resolved SPARK-26814.
--
Resolution: Won't Fix
> Avoid unnecessary block concatenation while empty block exists
>
wuyi created SPARK-26814:
Summary: Avoid unnecessary block concatenation while empty block
exists
Key: SPARK-26814
URL: https://issues.apache.org/jira/browse/SPARK-26814
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-26730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-26730:
-
Description:
For types like Product, we've already add AssertNotNull when we construct
serializer(pls see the
wuyi created SPARK-26730:
Summary: Strip redundant AssertNotNull expression for
ExpressionEncoder's serializer
Key: SPARK-26730
URL: https://issues.apache.org/jira/browse/SPARK-26730
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-26439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16736687#comment-16736687
]
wuyi commented on SPARK-26439:
--
[~srowen] ok.
> Introduce WorkerOffer reservation mechanism for Barrier
[
https://issues.apache.org/jira/browse/SPARK-26439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-26439:
-
Description:
Currently, Barrier TaskSet has a hard requirement that tasks can only be
launched
in a single
[
https://issues.apache.org/jira/browse/SPARK-26439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wuyi updated SPARK-26439:
-
Summary: Introduce WorkerOffer reservation mechanism for Barrier TaskSet
(was: Introduce WorkOffer reservation