balodesecurity opened a new pull request, #8328:
URL: https://github.com/apache/hadoop/pull/8328
## Summary
`DFSStripedInputStream.createBlockReader()` initialises `dnInfo` to a
sentinel `DNAddrPair(null, null, null, null)` before entering the retry loop.
If `refreshLocatedBlock()` throws an `IOException` (e.g. the block's start
offset is out of range after a file truncation, or the cached locations are
stale) _before_
`getBestNodeDNAddrPair()` is called, the catch block tries:
```java
addToLocalDeadNodes(dnInfo.info); // dnInfo.info is still null
```
`addToLocalDeadNodes` then calls `deadNodes.put(null, null)`, and
`ConcurrentHashMap` permits neither null keys nor null values, so a
`NullPointerException` is thrown instead, masking the original `IOException`.
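The `ConcurrentHashMap` behaviour is easy to confirm in isolation; a minimal sketch (using `String` keys purely for illustration):

```java
import java.util.concurrent.ConcurrentHashMap;

public class NullKeyDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> deadNodes = new ConcurrentHashMap<>();
        try {
            // ConcurrentHashMap rejects null keys and null values outright
            deadNodes.put(null, null);
            System.out.println("put accepted null");
        } catch (NullPointerException e) {
            System.out.println("NullPointerException: null key rejected");
        }
    }
}
```

This is exactly what happens inside the catch block when `dnInfo.info` is still null.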
**Fix:** add a null guard at the top of
`DFSInputStream.addToLocalDeadNodes()`:
```java
protected void addToLocalDeadNodes(DatanodeInfo dnInfo) {
  if (dnInfo == null) {
    return; // sentinel from createBlockReader(); nothing to record
  }
  deadNodes.put(dnInfo, dnInfo);
}
```
This is the safest fix location because it protects all callers, not just
the one in `createBlockReader`.
## Test plan
- [x] New test
`TestDFSStripedInputStream#testAddNullToLocalDeadNodesIsIgnored` creates a
striped file, opens a `DFSStripedInputStream`, calls
`addToLocalDeadNodes(null)`, and asserts that no exception is thrown and the
dead-nodes map remains empty.
- [x] Test passes with MiniDFSCluster (EC RS-6-3 policy).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]