[ 
https://issues.apache.org/jira/browse/TAPESTRY-2006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard M. Lewis Ship closed TAPESTRY-2006.
------------------------------------------

       Resolution: Fixed
    Fix Version/s: 5.0.8

Let's see how well this works out in the field. I think this is going to be a 
big win.

> Replace naive page pool mechanism with a more realistic one that can handle 
> larger sites
> ----------------------------------------------------------------------------------------
>
>                 Key: TAPESTRY-2006
>                 URL: https://issues.apache.org/jira/browse/TAPESTRY-2006
>             Project: Tapestry
>          Issue Type: New Feature
>          Components: tapestry-core
>    Affects Versions: 5.0.7
>            Reporter: Howard M. Lewis Ship
>            Assignee: Howard M. Lewis Ship
>            Priority: Critical
>             Fix For: 5.0.8
>
>
> The current page pooling mechanism is not very smart: pages are cached in 
> memory forever, regardless of whether they are ever used, and a new page 
> instance is created whenever one is needed and none is free.
> A less naive implementation would limit the number of page instances.
> Page instances should be purged periodically, based on an LRU algorithm. The 
> cutoff time should be configurable.
> The instance pool for a page/locale combination should track the number of 
> created instances, i.e. the number of page instances currently "in play", 
> with both a soft and a hard limit. If the soft limit is exceeded, wait a 
> short time (a few milliseconds, configurable) for an instance to become 
> available; if none does, create a fresh instance, unless the hard limit has 
> been reached.
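
The quoted description amounts to a bounded checkout/checkin pool. The sketch 
below illustrates that behavior in plain Java; it is not Tapestry's actual 
5.0.8 implementation, and the class and interface names (BoundedPagePool, 
PageFactory) are hypothetical.

    import java.util.LinkedList;

    // A minimal sketch of a soft/hard limited pool, assuming hypothetical
    // names; Tapestry's real page pool also keys pools by page name/locale
    // and purges idle instances on a configurable cutoff.
    public class BoundedPagePool<T> {

        public interface PageFactory<T> { T newPage(); }

        private final LinkedList<T> available = new LinkedList<>();
        private int inPlay;                 // instances currently checked out
        private final int softLimit;
        private final int hardLimit;
        private final long softWaitMillis;  // how long to wait at the soft limit
        private final PageFactory<T> factory;

        public BoundedPagePool(int softLimit, int hardLimit,
                               long softWaitMillis, PageFactory<T> factory) {
            this.softLimit = softLimit;
            this.hardLimit = hardLimit;
            this.softWaitMillis = softWaitMillis;
            this.factory = factory;
        }

        public synchronized T checkout() throws InterruptedException {
            T page = pollAvailable();
            if (page != null) return page;

            // Below the soft limit: just create a new instance.
            if (inPlay < softLimit) return create();

            // Soft limit reached: wait briefly for an instance to be returned.
            long deadline = System.currentTimeMillis() + softWaitMillis;
            long remaining = softWaitMillis;
            while (remaining > 0) {
                wait(remaining);
                page = pollAvailable();
                if (page != null) return page;
                remaining = deadline - System.currentTimeMillis();
            }

            // Still nothing: create a fresh instance unless the hard limit
            // has been reached (failing hard here is one possible choice).
            if (inPlay < hardLimit) return create();
            throw new IllegalStateException(
                "Hard limit of " + hardLimit + " page instances reached");
        }

        public synchronized void checkin(T page) {
            inPlay--;
            available.addLast(page);  // most recently used at the tail;
                                      // a periodic purge would trim the head
            notifyAll();
        }

        private T pollAvailable() {
            T page = available.pollLast();  // reuse the most recently returned
            if (page != null) inPlay++;
            return page;
        }

        private T create() {
            inPlay++;
            return factory.newPage();
        }
    }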

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

