pphust commented on a change in pull request #8323: Make sure ReferenceCountingSegment.decrement() is invoked correctly when useCache=true
URL: https://github.com/apache/incubator-druid/pull/8323#discussion_r315527431
 
 

 ##########
 File path: server/src/main/java/org/apache/druid/segment/realtime/appenderator/SinkQuerySegmentWalker.java
 ##########
 @@ -231,7 +231,7 @@ public SegmentDescriptor apply(final PartitionChunk<Sink> chunk)
                                                       // 1) Only use caching if data is immutable
                                                       // 2) Hydrants are not the same between replicas, make sure cache is local
                                                       if (hydrantDefinitelySwapped && cache.isLocal()) {
-                                                        QueryRunner<T> cachingRunner = new CachingQueryRunner<>(
+                                                        QueryRunner<T> cachingRunner = new CloseableCachingQueryRunner<>(
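The one-line change above swaps in a runner that can release its segment reference when the cached results have been consumed. As a hedged sketch of that pattern (the class, method names, and the plain `Iterator` standing in for Druid's result sequence are all hypothetical, not the real `CloseableCachingQueryRunner` API):

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: wrap a result iterator so a cleanup action
// (e.g. decrementing a segment's reference count) runs exactly once,
// after the results are fully consumed.
public class CloseableRunnerSketch
{
  static <T> Iterator<T> withCleanup(Iterator<T> delegate, Runnable cleanup)
  {
    return new Iterator<T>()
    {
      private boolean cleaned = false;

      @Override
      public boolean hasNext()
      {
        if (delegate.hasNext()) {
          return true;
        }
        if (!cleaned) {
          cleaned = true;
          cleanup.run();  // release the reference once, when exhausted
        }
        return false;
      }

      @Override
      public T next()
      {
        return delegate.next();
      }
    };
  }

  public static void main(String[] args)
  {
    AtomicInteger refCount = new AtomicInteger(1);  // simulated segment reference
    Iterator<String> rows =
        withCleanup(List.of("row1", "row2").iterator(), refCount::decrementAndGet);
    while (rows.hasNext()) {
      System.out.println(rows.next());
    }
    System.out.println("refCount=" + refCount.get());  // prints refCount=0
  }
}
```

The `cleaned` flag matters: repeated `hasNext()` calls on an exhausted iterator must not decrement the reference count more than once.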
 
 Review comment:
   It looks like a good idea, and I have modified the code accordingly.
   I am not entirely sure about "Reduce the level of nesting". I think we need at least two levels of loops: one over each sink/segment in the timeline and the other over each hydrant in the sink.
   For now I have just changed `FunctionalIterable.create(specs).transform()` to `Iterables.transform()`. Is that what you had in mind?
   FYI, the fix has been cherry-picked onto our local 0.15.1-incubating branch and works well on our cluster.
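   The two levels of loops described above can be sketched as follows. This is a hedged, dependency-free illustration: `Sink`, `Hydrant`, and `makeRunner` are hypothetical stand-ins for the real Druid types, and plain loops replace the lazy Guava `Iterables.transform` view the actual code uses.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch of the two-level iteration: outer pass over each
// sink/segment in the timeline, inner pass over each hydrant in the sink.
public class TwoLevelIterationSketch
{
  record Hydrant(String id) {}
  record Sink(List<Hydrant> hydrants) {}

  static List<String> perHydrantRunners(List<Sink> timeline, Function<Hydrant, String> makeRunner)
  {
    List<String> runners = new ArrayList<>();
    for (Sink sink : timeline) {                  // level 1: each sink in the timeline
      for (Hydrant hydrant : sink.hydrants()) {   // level 2: each hydrant in the sink
        runners.add(makeRunner.apply(hydrant));
      }
    }
    return runners;
  }

  public static void main(String[] args)
  {
    List<Sink> timeline = List.of(
        new Sink(List.of(new Hydrant("h0"), new Hydrant("h1"))),
        new Sink(List.of(new Hydrant("h2")))
    );
    System.out.println(perHydrantRunners(timeline, h -> "runner(" + h.id() + ")"));
    // prints [runner(h0), runner(h1), runner(h2)]
  }
}
```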

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
