ivankelly commented on a change in pull request #832: Issue 620: Close the fileChannels for read when they are idle
URL: https://github.com/apache/bookkeeper/pull/832#discussion_r167524549
 
 

 ##########
 File path: bookkeeper-server/src/main/java/org/apache/bookkeeper/bookie/EntryLogger.java
 ##########
 @@ -330,54 +333,216 @@ private int readFromLogChannel(long entryLogId, BufferedReadChannel channel, Byt
     * A thread-local variable that wraps a mapping of log ids to buffered channels.
     * These channels should be used only for reading. logChannel is the one
     * that is used for writes.
+     * We use this Guava cache to store the BufferedReadChannel instances.
+     * When a BufferedReadChannel is evicted, the refCnt of the underlying fileChannel
+     * is decremented by 1. The expiry is temporarily set to 1 hour, to avoid
+     * re-opening channels between reads.
      */
-    private final ThreadLocal<Map<Long, BufferedReadChannel>> logid2Channel =
-            new ThreadLocal<Map<Long, BufferedReadChannel>>() {
+    private final ThreadLocal<Cache<Long, BufferedReadChannel>> logid2Channel =
+            new ThreadLocal<Cache<Long, BufferedReadChannel>>() {
         @Override
-        public Map<Long, BufferedReadChannel> initialValue() {
+        public Cache<Long, BufferedReadChannel> initialValue() {
             // Since this is thread local there is only one modifier.
             // We don't really need the concurrency, but we need to use
             // the weak values. Therefore using the concurrency level of 1
-            return new MapMaker().concurrencyLevel(1)
-                .weakValues()
-                .makeMap();
+            return CacheBuilder.newBuilder().concurrencyLevel(1)
+                    .expireAfterAccess(readChannelCacheExpireTimeMs, TimeUnit.MILLISECONDS)
+                    // decrease the refCnt of the underlying fileChannel on eviction
+                    .removalListener(removal -> logid2FileChannel.get((Long) removal.getKey()).release())
+                    .build(readChannelLoader);
+        }
+    };
+
+    @VisibleForTesting
+    long getReadChannelCacheExpireTimeMs() {
+        return readChannelCacheExpireTimeMs;
+    }
+
+    @VisibleForTesting
+    CacheLoader<Long, BufferedReadChannel> getReadChannelLoader() {
+        return readChannelLoader;
+    }
+
+    private final CacheLoader<Long, BufferedReadChannel> readChannelLoader =
+            new CacheLoader<Long, BufferedReadChannel>() {
+        @Override
+        public BufferedReadChannel load(Long entryLogId) throws Exception {
+            return getChannelForLogId(entryLogId);
         }
     };
 
     /**
-     * Each thread local buffered read channel can share the same file handle because reads are not relative
-     * and don't cause a change in the channel's position. We use this map to store the file channels. Each
-     * file channel is mapped to a log id which represents an open log file.
+     * FileChannelBackingCache caches reference-counted file channels for reads.
+     * To avoid handing out a released file channel, it adopts the design of FileInfoBackingCache.
+     * @see FileInfoBackingCache
      */
-    private final ConcurrentMap<Long, FileChannel> logid2FileChannel = new ConcurrentHashMap<Long, FileChannel>();
+    class FileChannelBackingCache {
+        static final int DEAD_REF = -0xdead;
+
+        final ConcurrentHashMap<Long, CachedFileChannel> fileChannels = new ConcurrentHashMap<>();
+
+        CachedFileChannel loadFileChannel(long logId) throws IOException {
+            CachedFileChannel cachedFileChannel = fileChannels.get(logId);
+            if (cachedFileChannel != null) {
+                boolean retained = cachedFileChannel.tryRetain();
+                assert(retained);
+                return cachedFileChannel;
+            }
+            File file = findFile(logId);
+            // the file is an existing entry log, so open its channel in read-only mode
+            FileChannel newFc = new RandomAccessFile(file, "r").getChannel();
+            cachedFileChannel = new CachedFileChannel(logId, newFc);
+            fileChannels.put(logId, cachedFileChannel);
+            boolean retained = cachedFileChannel.tryRetain();
+            assert(retained);
+            return cachedFileChannel;
+        }
+
+        /**
+         * Close the FileChannel and remove it from the cache when possible.
+         * @param logId id of the entry log whose channel is being released
+         * @param fc the cached file channel to release
+         */
+        private void releaseFileChannel(long logId, CachedFileChannel fc) {
+            if (fc.markDead()) {
+                try {
+                    fc.fc.close();
+                } catch (IOException e) {
+                    LOG.warn("Exception occurred while closing channel"
+                            + " for log file: {}", fc, e);
+                }
+                fileChannels.remove(logId);
 
 Review comment:
   The write lock is needed because marking a fileinfo as dead, flushing its contents to disk, and removing it from the map need to be an atomic operation. If the lock wasn't there, then we could mark it as dead, and then, before removing it from the map, another thread could call loadFileInfo() and end up returning a dead fileinfo (tryRetain would fail). This would put the caller into a tight loop. I think we could remove the lock entirely from this class, but I'm very reluctant to mess with it, since this stuff is hard to get right (and it's often very hard to tell that you got it wrong).
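  To make the hazard concrete, here is a minimal, self-contained sketch of the pattern being discussed. All class and method names here (RefCountSketch, Ref, load, release) are illustrative only, not BookKeeper's actual API: tryRetain refuses dead entries, and the write lock makes mark-dead-and-remove atomic with respect to loads, so a loader can never observe a dead entry still sitting in the map.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the refcounting pattern discussed above;
// names are illustrative and do not match BookKeeper's real classes.
public class RefCountSketch {
    static final int DEAD_REF = -0xdead;

    static class Ref {
        final AtomicInteger refCnt = new AtomicInteger(0);

        // Retain only while the entry is alive (refCnt >= 0).
        boolean tryRetain() {
            int v = refCnt.get();
            while (v >= 0) {
                if (refCnt.compareAndSet(v, v + 1)) {
                    return true;
                }
                v = refCnt.get();
            }
            return false; // entry is dead; caller must reload
        }

        // 0 -> DEAD_REF exactly once; only the winner may close and remove.
        boolean markDead() {
            return refCnt.compareAndSet(0, DEAD_REF);
        }
    }

    final ConcurrentHashMap<Long, Ref> cache = new ConcurrentHashMap<>();
    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Loads run under the read lock, so they can never observe an entry
    // in the window between markDead() and remove(): that window only
    // exists while the write lock is held.
    Ref load(long id) {
        lock.readLock().lock();
        try {
            Ref r = cache.computeIfAbsent(id, k -> new Ref());
            if (!r.tryRetain()) {
                // Without the write lock in release(), this is exactly the
                // state that would send callers into a tight retry loop.
                throw new IllegalStateException("dead entry visible in map");
            }
            return r;
        } finally {
            lock.readLock().unlock();
        }
    }

    // Decrement, and if this was the last reference, atomically mark the
    // entry dead and remove it (a real cache would also close the
    // underlying FileChannel at this point).
    void release(long id, Ref r) {
        lock.writeLock().lock();
        try {
            if (r.refCnt.decrementAndGet() == 0 && r.markDead()) {
                cache.remove(id, r);
            }
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

  Two loads of the same id return the same entry with refCnt == 2; releasing both drops it to 0, marks it dead, and removes it, so a later load creates a fresh entry instead of spinning on a dead one.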

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
