echobravopapa commented on a change in pull request #5962:
URL: https://github.com/apache/geode/pull/5962#discussion_r566344530
##########
File path:
geode-core/src/main/java/org/apache/geode/distributed/internal/direct/DirectChannel.java
##########
@@ -137,7 +142,7 @@ public DirectChannel(Membership<InternalDistributedMember> mgr,
     props.setProperty("membership_port_range_start", "" + range[0]);
     props.setProperty("membership_port_range_end", "" + range[1]);
-    this.conduit = new TCPConduit(mgr, port, address, isBindAddress, this, props);
+    this.conduit = new TCPConduit(mgr, port, address, isBindAddress, this, bufferPool, props);
Review comment:
i.e. providing `bufferPool` directly to `TCPConduit`...
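A minimal sketch of the constructor-injection pattern being suggested (the classes below are simplified placeholders, not the real Geode signatures):

```java
// Simplified stand-ins to illustrate passing a shared BufferPool into
// the conduit's constructor; not the actual Geode API.
class BufferPool {
    // A real pool would recycle direct buffers; this stub just allocates.
    java.nio.ByteBuffer acquireBuffer(int size) {
        return java.nio.ByteBuffer.allocateDirect(size);
    }
}

class TCPConduit {
    private final BufferPool bufferPool;

    // The pool is injected rather than created internally, so the
    // conduit and its collaborators share the owner's single pool.
    TCPConduit(BufferPool bufferPool) {
        this.bufferPool = bufferPool;
    }

    BufferPool getBufferPool() {
        return bufferPool;
    }
}
```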
##########
File path:
geode-core/src/main/java/org/apache/geode/distributed/internal/direct/DirectChannel.java
##########
@@ -112,6 +115,8 @@ public DirectChannel(Membership<InternalDistributedMember> mgr,
       throws ConnectionException {
     this.receiver = listener;
     this.dm = dm;
+    this.stats = dm.getStats();
+    this.bufferPool = new BufferPool(stats);
Review comment:
now `DirectChannel` owns `BufferPool` and makes the pool accessible as
needed...
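A rough sketch of that ownership arrangement (`DMStats` and the constructor shapes here are simplified placeholders, not the Geode classes):

```java
// Simplified sketch: DirectChannel creates the one BufferPool from the
// distribution manager's stats and hands it out to every collaborator.
interface DMStats { /* buffer-usage counters live here in Geode */ }

class BufferPool {
    private final DMStats stats;

    BufferPool(DMStats stats) {
        this.stats = stats;
    }
}

class DirectChannel {
    private final DMStats stats;
    private final BufferPool bufferPool;

    DirectChannel(DMStats stats) {
        this.stats = stats;
        this.bufferPool = new BufferPool(stats); // sole owner of the pool
    }

    // Collaborators (TCPConduit, MsgStreamer, ...) fetch the shared pool
    // here instead of constructing their own copies.
    BufferPool getBufferPool() {
        return bufferPool;
    }
}
```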
##########
File path:
geode-core/src/main/java/org/apache/geode/internal/tcp/ConnectionTable.java
##########
@@ -207,7 +207,7 @@ private ConnectionTable(TCPConduit conduit) {
     threadConnectionMap = new ConcurrentHashMap();
     p2pReaderThreadPool = createThreadPoolForIO(conduit.getDM().getSystem().isShareSockets());
     socketCloser = new SocketCloser();
-    bufferPool = new BufferPool(owner.getStats());
+    bufferPool = conduit.getBufferPool();
Review comment:
`ConnectionTable` formerly owned `BufferPool`
##########
File path:
geode-core/src/main/java/org/apache/geode/distributed/internal/direct/DirectChannel.java
##########
@@ -295,7 +307,7 @@ private int sendToMany(final Membership mgr,
     List<?> sentCons; // used for cons we sent to this time
     final BaseMsgStreamer ms =
-        MsgStreamer.create(cons, msg, directReply, stats, getConduit().getBufferPool());
+        MsgStreamer.create(cons, msg, directReply, stats, bufferPool);
Review comment:
`DirectChannel` owning `bufferPool` makes even more sense here
##########
File path:
geode-core/src/test/java/org/apache/geode/internal/tcp/TCPConduitTest.java
##########
@@ -72,11 +73,24 @@ public void setUp() throws Exception {
         .thenReturn(mock(DistributionManager.class));
   }
+  @Test
+  public void closedConduitDoesNotThrowNPEWhenAskedForBufferPool() {
+    directChannel.getDM(); // Mockito demands that this mock be used in this test
+    TCPConduit tcpConduit =
+        new TCPConduit(membership, 0, localHost, false, directChannel, mock(BufferPool.class),
+            new Properties(),
+            TCPConduit -> connectionTable, socketCreator, doNothing(), false);
+    InternalDistributedMember member = mock(InternalDistributedMember.class);
+    tcpConduit.stop(null);
+    assertThat(tcpConduit.getBufferPool()).isNotNull();
Review comment:
and now that the `tcpConduit` is constructed with a `BufferPool`, this can
never be null
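A sketch of why the null check holds (again with placeholder classes, not the Geode API): a `final`, constructor-injected field cannot become null when the conduit is stopped.

```java
class BufferPool { }

class TCPConduit {
    private final BufferPool bufferPool; // final: set once, never cleared
    private volatile boolean stopped;

    TCPConduit(BufferPool bufferPool) {
        this.bufferPool = bufferPool;
    }

    void stop() {
        // Shutting down tears down connections, but it does not (and
        // cannot) null out the final bufferPool reference.
        stopped = true;
    }

    BufferPool getBufferPool() {
        return bufferPool;
    }
}
```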
##########
File path:
geode-core/src/main/java/org/apache/geode/internal/tcp/ConnectionTable.java
##########
@@ -207,7 +207,7 @@ private ConnectionTable(TCPConduit conduit) {
     threadConnectionMap = new ConcurrentHashMap();
     p2pReaderThreadPool = createThreadPoolForIO(conduit.getDM().getSystem().isShareSockets());
     socketCloser = new SocketCloser();
-    bufferPool = new BufferPool(owner.getStats());
+    bufferPool = conduit.getBufferPool();
Review comment:
but this was not guaranteed to be available...
##########
File path:
geode-core/src/main/java/org/apache/geode/distributed/internal/direct/DirectChannel.java
##########
@@ -112,6 +115,8 @@ public DirectChannel(Membership<InternalDistributedMember> mgr,
       throws ConnectionException {
     this.receiver = listener;
     this.dm = dm;
+    this.stats = dm.getStats();
+    this.bufferPool = new BufferPool(stats);
Review comment:
and the same applies to `DMStats`
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]