sodonnel commented on a change in pull request #2973:
URL: https://github.com/apache/ozone/pull/2973#discussion_r782052188
##########
File path:
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ExcludeList.java
##########
@@ -22,43 +22,67 @@
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.scm.container.ContainerID;
import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import org.apache.hadoop.util.Daemon;
import java.util.Collection;
+import java.util.HashMap;
import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Map;
import java.util.Set;
import java.util.UUID;
+import static org.apache.hadoop.util.Time.monotonicNow;
+
/**
* This class contains set of dns and containers which ozone client provides
* to be handed over to SCM when block allocation request comes.
*/
public class ExcludeList {
- private final Set<DatanodeDetails> datanodes;
+ private final Map<DatanodeDetails, Long> datanodes;
private final Set<ContainerID> containerIds;
private final Set<PipelineID> pipelineIds;
+ private Daemon excludeNodesCleaner;
+ private boolean autoCleanerRunning;
public ExcludeList() {
- datanodes = new HashSet<>();
+ datanodes = new HashMap<>();
containerIds = new HashSet<>();
pipelineIds = new HashSet<>();
}
+ public void startAutoExcludeNodesCleaner(long expiryTime,
+ long recheckInterval) {
+ excludeNodesCleaner =
+ new Daemon(new ExcludeNodesCleaner(expiryTime, recheckInterval));
+ excludeNodesCleaner.start();
+ }
+
+ public void stopAutoExcludeNodesCleaner() {
+ if (excludeNodesCleaner != null) {
+ autoCleanerRunning = false;
+ excludeNodesCleaner.interrupt();
+ }
+ }
+
public Set<ContainerID> getContainerIds() {
return containerIds;
}
public Set<DatanodeDetails> getDatanodes() {
- return datanodes;
+ return datanodes.keySet();
Review comment:
From the Map docs:
> Returns a Set view of the keys contained in this map. The set is backed by
the map, so changes to the map are reflected in the set, and vice-versa. If the
map is modified while an iteration over the set is in progress (except through
the iterator's own remove operation), the results of the iteration are
undefined.
I wonder if it is safe to just return the keySet here, since it could get
modified concurrently while it is being used elsewhere. It might be safer to
construct a new Set from the keySet and return that, so it stands alone.
If we do that, then I wonder if we could simplify this entire change and
just remove the expired nodes from the map when it is requested, e.g.:
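To illustrate the point, here is a minimal standalone sketch (using String keys in place of DatanodeDetails) of the difference between the keySet view and a defensive copy:
```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class KeySetCopyDemo {
  public static void main(String[] args) {
    Map<String, Long> datanodes = new HashMap<>();
    datanodes.put("dn1", 1L);

    // View: backed by the map, so later map changes show through it
    Set<String> view = datanodes.keySet();
    // Defensive copy: an independent snapshot at this point in time
    Set<String> copy = new HashSet<>(datanodes.keySet());

    datanodes.put("dn2", 2L);

    System.out.println(view.size()); // 2 - the view tracks the map
    System.out.println(copy.size()); // 1 - the copy is unaffected
  }
}
```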
```
Set<DatanodeDetails> nodes = new HashSet<>();
Iterator<Map.Entry<DatanodeDetails, Long>> it =
    datanodes.entrySet().iterator();
while (it.hasNext()) {
  Map.Entry<DatanodeDetails, Long> e = it.next();
  // Assuming the map value is the monotonic time the node was added,
  // drop entries older than the configured expiry interval.
  if (monotonicNow() - e.getValue() > expiryTime) {
    it.remove();
  } else {
    nodes.add(e.getKey());
  }
}
return nodes;
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]