Brilliant idea. I will do more experiments.
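Before re-running the benchmarks, here is a minimal, self-contained sketch of the backward cleanup you suggest below, combined with the cached-oldest-expiry idea from earlier in the thread. All names (CacheSketch, Entry, isExpired, cleanup) are illustrative, not from the actual webrev, and it assumes a uniform lifetime so the tail entry is always the first to expire:

```java
import java.util.LinkedList;

// Hypothetical sketch: entries are added with addFirst(), so the
// oldest entries sit at the tail and cleanup can walk backward
// from the end of the list.
class CacheSketch {
    static final class Entry {
        final long expiryMillis;
        Entry(long expiryMillis) { this.expiryMillis = expiryMillis; }
        boolean isExpired(long now) { return expiryMillis <= now; }
    }

    private final LinkedList<Entry> entries = new LinkedList<>();

    // Cached expiry of the oldest entry; lets us skip touching the
    // list entirely when nothing can have expired yet.
    private long oldestExpiry = Long.MAX_VALUE;

    void add(long expiryMillis) {
        entries.addFirst(new Entry(expiryMillis));
        oldestExpiry = Math.min(oldestExpiry, expiryMillis);
    }

    void cleanup(long now) {
        if (now < oldestExpiry) {
            return;   // oldest entry still alive: nothing to remove
        }
        while (!entries.isEmpty()) {
            Entry e = entries.removeLast();
            if (!e.isExpired(now)) {
                entries.addLast(e);          // put the live entry back
                oldestExpiry = e.expiryMillis;
                return;
            }
        }
        oldestExpiry = Long.MAX_VALUE;       // list drained
    }

    int size() { return entries.size(); }
}
```

With this, a cleanup call that arrives before the oldest entry expires costs a single comparison and never iterates the list, which should address the synchronization concern without hard-coding any fixed interval.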


> On Feb 23, 2018, at 11:25 AM, Xuelei Fan <> wrote:
> Hi Weijun,
> I thought more about the performance impact.  The impact may mainly come from 
> the large size of the cached entries.
> The current implementation needs to go over the full list: find the first 
> expired item and then remove the rest.  The previous one had an additional 
> pass with entries.indexOf().  Could we just start from the end of the list?
>    while (!entries.isEmpty()) {
>        E e = entries.removeLast();
>        if (!e.isExpired()) {
>            entries.addLast(e);   // add it back
>            break;
>        }
>    }
> If the list is really big, and the lifetime is significantly long as well (>> 1 
> minute), iterating from the oldest item (backward from the end of the list) may 
> be much more effective.  LinkedList itself is not synchronized, so as long as 
> there are not too many items to go over, the performance should be fine.  I 
> hesitate to hard-code a cleanup-every-minute strategy.  If we clean often, 
> there may not be too many items to go over in the list.  So we might be able 
> to remove the "at most every minute" strategy.
> Xuelei
> On 2/22/2018 5:55 PM, Weijun Wang wrote:
>> Updated webrev at
>>> On Feb 23, 2018, at 9:02 AM, Weijun Wang <> wrote:
>>> You mean I can save it somewhere and only update it when a cleanup is 
>>> performed?
>>> This should be better. Now there are only isEmpty(), getFirst() and 
>>> addFirst() calls, and one fewer getLast().
>>> Thanks
>>> Max
>>>> On Feb 23, 2018, at 1:45 AM, Xuelei Fan <> wrote:
>>>> Looks like list synchronization is a factor in the performance impact. 
>>>> Maybe you can keep a private timestamp for the oldest entry so you don't 
>>>> access/iterate/clean up the entries list until necessary.  The "at most 
>>>> every minute" approach may not be a good strategy in some situations.
>> In fact, it's now almost "exactly every minute". In what situations do you 
>> think it's not good? I cannot use size() because I have to remember all 
>> entries with a lifespan to be correct.
>> Thanks
>> Max
>>>> Xuelei
>>>> On 2/22/2018 12:36 AM, Weijun Wang wrote:
>>>>> Please take a review at
>>>>> Two notes:
>>>>> 1. I tried list.subList(here, end).clear() but it's not faster.
>>>>> 2. I have looked at ConcurrentHashMap + ConcurrentSkipListMap but will 
>>>>> need more time to verify their correctness and measure the performance 
>>>>> gain. Since the bug is reported on 8u, a safer fix looks better.
>>>>> Noreg-perf.
>>>>> Thanks
>>>>> Max
