[ https://issues.apache.org/jira/browse/DERBY-2911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12551766 ]

Knut Anders Hatlen commented on DERBY-2911:
-------------------------------------------

Committed d2911-11.diff with revision 604153.

I believe that means all the functionality of the old buffer manager is now 
also implemented in the new one.

In an earlier comment, I mentioned that we might see more concurrency when 
the eviction frequency is high if we replaced the synchronization lock on the 
ArrayList in ClockPolicy with a ReadWriteLock. I made a quick patch and ran a 
couple of tests, but I couldn't see any significant improvement, so I'll 
leave it as it is for now. If it later turns out to be a bottleneck, it will 
be very simple to change.
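
For the record, the idea was roughly the following (a minimal sketch with 
made-up names, not the actual patch): reads of the clock's ArrayList take the 
shared lock, and only operations that change the list structurally take the 
exclusive lock.

  import java.util.ArrayList;
  import java.util.concurrent.locks.ReadWriteLock;
  import java.util.concurrent.locks.ReentrantReadWriteLock;

  class ClockSketch<T> {
      private final ArrayList<T> clock = new ArrayList<T>();
      private final ReadWriteLock lock = new ReentrantReadWriteLock();

      // Rotating the hand and reading slots can happen concurrently.
      T get(int index) {
          lock.readLock().lock();
          try {
              return clock.get(index);
          } finally {
              lock.readLock().unlock();
          }
      }

      // Growing the clock is rare and takes the exclusive lock.
      void grow(T item) {
          lock.writeLock().lock();
          try {
              clock.add(item);
          } finally {
              lock.writeLock().unlock();
          }
      }
  }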

I plan to rerun the tests I ran earlier on different hardware and with 
different page cache sizes to see if we still have the performance gains we had 
then, and also to see if there is some load/configuration where the performance 
is worse with the new buffer manager. I will post the results once I have them.

I would appreciate it if someone would review the code. The new buffer manager 
does not share any code with the old buffer manager, and it is completely 
contained in these files in the impl/services/cache directory:

  ConcurrentCacheFactory.java - implementation of the CacheFactory interface; 
it contains only a single factory method that creates a ConcurrentCache 
instance

  ConcurrentCache.java - the CacheManager implementation. This class contains 
the code needed to insert, find, release and remove items in the cache, and 
it is built around a ConcurrentHashMap (the find path is sketched after this 
list)

  CacheEntry.java - wrapper/holder object for objects in the cache. It holds 
some per-entry state (keep count, validity, etc.) and the code that enables 
fine-grained synchronization (a ReentrantLock at the entry level instead of 
synchronization at the cache level as in the old buffer manager; sketched 
below)

  ReplacementPolicy.java/ClockPolicy.java - interface and implementation of 
the replacement algorithm, that is, the algorithm used to evict objects from 
the cache to make room for new ones when the cache is full (the textbook 
version of the algorithm is sketched below)

  BackgroundCleaner.java - a Serviceable that the cache manager uses to 
perform asynchronous I/O and other background tasks (illustrated below)
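
To make the review easier, here are some rough sketches of the central ideas. 
First the entry-level locking in CacheEntry (made-up names and less state 
than the real class; the point is only that each entry has its own lock, so 
threads touching different entries never contend):

  import java.util.concurrent.locks.ReentrantLock;

  class EntrySketch {
      private final ReentrantLock mutex = new ReentrantLock();
      private int keepCount;   // > 0 means the entry must not be evicted
      private boolean valid;   // false until the cached object is loaded

      void lock()   { mutex.lock(); }
      void unlock() { mutex.unlock(); }

      // Callers are expected to hold the entry lock around these.
      boolean isValid()         { return valid; }
      void setValid(boolean v)  { valid = v; }

      void keep() {
          lock();
          try {
              keepCount++;
          } finally {
              unlock();
          }
      }

      void release() {
          lock();
          try {
              keepCount--;
          } finally {
              unlock();
          }
      }
  }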
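
ConcurrentCache itself is essentially a ConcurrentHashMap from identity to 
entry. The sketch below (reusing the EntrySketch class above, and ignoring 
eviction and the loading of the cached object) shows how putIfAbsent() 
resolves the race between two threads that try to insert an entry for the 
same key at the same time:

  import java.util.concurrent.ConcurrentHashMap;

  class CacheSketch {
      private final ConcurrentHashMap<Object, EntrySketch> entries =
              new ConcurrentHashMap<Object, EntrySketch>();

      EntrySketch find(Object key) {
          EntrySketch entry = entries.get(key);
          if (entry == null) {
              EntrySketch fresh = new EntrySketch();
              entry = entries.putIfAbsent(key, fresh);
              if (entry == null) {
                  entry = fresh;  // we won the race and must load the object
              }
          }
          entry.keep();  // protect the entry from eviction while it's in use
          return entry;
      }

      void release(EntrySketch entry) {
          entry.release();  // the entry may be evicted again
      }
  }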
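
ClockPolicy implements a variant of the classic clock (second-chance) 
algorithm. The textbook version, which is all this sketch shows, works like 
this: the hand sweeps a circular list, a slot whose "recently used" flag is 
set gets a second chance (the flag is cleared), and the first slot that is 
neither kept nor recently used is evicted:

  import java.util.ArrayList;

  class ClockHandSketch {
      static class Slot {
          boolean recentlyUsed;  // set on access, cleared by the hand
          boolean kept;          // maps to keep count > 0 in the real cache
          Object item;
      }

      private final ArrayList<Slot> clock = new ArrayList<Slot>();
      private int hand;  // index of the next slot to inspect

      synchronized Slot findVictim() {
          // At most two full rotations: one to clear flags, one to evict.
          for (int i = 0; i < 2 * clock.size(); i++) {
              Slot slot = clock.get(hand);
              hand = (hand + 1) % clock.size();
              if (slot.kept) {
                  continue;                   // in use, skip it
              }
              if (slot.recentlyUsed) {
                  slot.recentlyUsed = false;  // second chance
              } else {
                  return slot;                // evict this one
              }
          }
          return null;  // everything is kept; the caller must grow the clock
      }
  }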
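
Finally, BackgroundCleaner is a Serviceable, so it is driven by Derby's 
daemon service rather than owning a thread the way this illustration does, 
but the principle is the same: the foreground thread queues a dirty entry and 
moves on, and the cleaning I/O happens in the background:

  import java.util.concurrent.ArrayBlockingQueue;
  import java.util.concurrent.BlockingQueue;

  class CleanerSketch implements Runnable {
      private final BlockingQueue<EntrySketch> queue =
              new ArrayBlockingQueue<EntrySketch>(128);

      // Called by foreground threads; returns false if the queue is
      // full, in which case the caller must clean the entry itself.
      boolean scheduleClean(EntrySketch entry) {
          return queue.offer(entry);
      }

      // Runs in a background thread and performs the actual cleaning.
      public void run() {
          try {
              for (;;) {
                  EntrySketch entry = queue.take();
                  // ... write the entry's object to disk here ...
              }
          } catch (InterruptedException ie) {
              // interrupted: stop the cleaner
          }
      }
  }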

> Implement a buffer manager using java.util.concurrent classes
> -------------------------------------------------------------
>
>                 Key: DERBY-2911
>                 URL: https://issues.apache.org/jira/browse/DERBY-2911
>             Project: Derby
>          Issue Type: Improvement
>          Components: Performance, Services
>    Affects Versions: 10.4.0.0
>            Reporter: Knut Anders Hatlen
>            Assignee: Knut Anders Hatlen
>            Priority: Minor
>         Attachments: cleaner.diff, cleaner.tar, d2911-1.diff, d2911-1.stat, 
> d2911-10.diff, d2911-10.stat, d2911-11.diff, d2911-2.diff, d2911-3.diff, 
> d2911-4.diff, d2911-5.diff, d2911-6.diff, d2911-6.stat, d2911-7.diff, 
> d2911-7a.diff, d2911-9.diff, d2911-9.stat, d2911-entry-javadoc.diff, 
> d2911-unused.diff, d2911-unused.stat, d2911perf.java, derby-2911-8.diff, 
> derby-2911-8.stat, perftest6.pdf, poisson_patch8.tar
>
>
> There are indications that the buffer manager is a bottleneck for some types 
> of multi-user load. For instance, Anders Morken wrote this in a comment on 
> DERBY-1704: "With a separate table and index for each thread (to remove latch 
> contention and lock waits from the equation) we (...) found that 
> org.apache.derby.impl.services.cache.Clock.find()/release() caused about 5 
> times more contention than the synchronization in LockSet.lockObject() and 
> LockSet.unlock(). That might be an indicator of where to apply the next push".
> It would be interesting to see the scalability and performance of a buffer 
> manager which exploits the concurrency utilities added in Java SE 5.
