advancedxy commented on code in PR #424:
URL: https://github.com/apache/incubator-uniffle/pull/424#discussion_r1050362964
##########
integration-test/common/src/test/java/org/apache/uniffle/test/DiskErrorToleranceTest.java:
##########
@@ -87,8 +86,7 @@ public void createClient() {
public void closeClient() {
shuffleServerClient.close();
}
-
- @Test
+
public void diskErrorTest() throws Exception {
Review Comment:
Why is the `@Test` annotation removed? Without it, `diskErrorTest` will no longer be picked up by the test runner.
##########
server/src/main/java/org/apache/uniffle/server/ShuffleDataReadEvent.java:
##########
@@ -17,16 +17,20 @@
package org.apache.uniffle.server;
+import org.apache.uniffle.common.PartitionRange;
+
public class ShuffleDataReadEvent {
private String appId;
private int shuffleId;
- private int startPartition;
+ private int partitionId;
+ private PartitionRange partitionRange;
- public ShuffleDataReadEvent(String appId, int shuffleId, int startPartition) {
+ public ShuffleDataReadEvent(String appId, int shuffleId, int partitionId, int[] range) {
Review Comment:
What is the motivation for this change?
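For context, a minimal sketch of what the new constructor presumably does with the `int[] range` argument. The `PartitionRange` stand-in below (its field and method names) is an assumption for illustration, not the actual `org.apache.uniffle.common.PartitionRange` class, and the class name `ShuffleDataReadEventSketch` is hypothetical:

```java
// Local stand-in for org.apache.uniffle.common.PartitionRange;
// the field/method names here are assumptions, not the real class.
class PartitionRange {
  private final int start;
  private final int end;

  PartitionRange(int start, int end) {
    this.start = start;
    this.end = end;
  }

  int getStart() { return start; }
  int getEnd() { return end; }
}

public class ShuffleDataReadEventSketch {
  private final String appId;
  private final int shuffleId;
  private final int partitionId;
  private final PartitionRange partitionRange;

  // Mirrors the new signature in the diff; the int[] range is assumed
  // to hold [startPartition, endPartition].
  public ShuffleDataReadEventSketch(String appId, int shuffleId,
      int partitionId, int[] range) {
    this.appId = appId;
    this.shuffleId = shuffleId;
    this.partitionId = partitionId;
    this.partitionRange = new PartitionRange(range[0], range[1]);
  }

  public static void main(String[] args) {
    ShuffleDataReadEventSketch event =
        new ShuffleDataReadEventSketch("app_1", 0, 3, new int[] {0, 7});
    // Prints the range the event was constructed with.
    System.out.println(event.partitionRange.getStart() + "-"
        + event.partitionRange.getEnd());
  }
}
```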
##########
server/src/main/java/org/apache/uniffle/server/storage/LocalStorageManager.java:
##########
@@ -69,8 +71,8 @@ public class LocalStorageManager extends SingleStorageManager
{
private final List<LocalStorage> localStorages;
private final List<String> storageBasePaths;
private final LocalStorageChecker checker;
- private List<LocalStorage> unCorruptedStorages = Lists.newArrayList();
- private final Set<String> corruptedStorages = Sets.newConcurrentHashSet();
+
+ private final Map<PartitionUnionKey, LocalStorage> partitionsOfStorage;
Review Comment:
> From our dashboard, there are only a few thousand partitions running at the same time

For large Spark apps, it's common to have ~10K shuffle partitions, and that's just one app. However, maybe we haven't reached that kind of scale yet.
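A back-of-the-envelope sketch of how large the per-partition map could grow at the scale discussed above. All numbers here (apps per cluster, bytes per entry) are illustrative assumptions, not measurements from the dashboard:

```java
// Rough size estimate for a Map<PartitionUnionKey, LocalStorage> keyed
// per partition; every figure below is an assumption for illustration.
public class PartitionScaleEstimate {
  public static void main(String[] args) {
    int partitionsPerApp = 10_000; // ~10K shuffle partitions in a large Spark app
    int concurrentApps = 50;       // assumed concurrent apps on one server
    long bytesPerEntry = 100;      // rough HashMap entry + key overhead

    long entries = (long) partitionsPerApp * concurrentApps;
    long approxBytes = entries * bytesPerEntry;

    System.out.println("entries=" + entries);
    System.out.println("approxMiB=" + approxBytes / (1024 * 1024));
  }
}
```

Under these assumptions the map stays in the tens of MiB, which suggests memory is unlikely to be the bottleneck even at 10K partitions per app.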
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]