Ok, after more review of Velocity code and also our usage, it occurs to me that what we may be doing wrong is not using the "toolbox" paradigm.
Essentially what we want is complete integration: we handle all of the request/response ourselves, and all we want from Velocity is its processing & output, which we merge as needed throughout the rendering cycle. We delegate to Velocity rendering at three tiers:

- wrapping a given portlet (this is like drawing a window, with menus, status bar, etc., in a GUI toolkit)
- processing the layout of the page (like a desktop refresh which positions the individual portlets where they go on the page)
- processing our dynamic CMS content within arbitrary portlets on any page (the standard content any website usually has: news articles, ads, etc.)

So, within a given request to the portal you pass through at least two of these tiers: usually N portlet wrappings (1 per portlet on the given page), 1 layout rendering, and possibly N CMS content renderings. Each tier is completely independent of the others and has a different set of params.

For example, suppose we have a page with 20 news articles (rendered using VTL) in 20 portlets (wrappings rendered separately using VTL) on a two column layout (rendered in VTL). That makes 20+20+1 different times during a single request when we do:

    VelocityContext vc = new VelocityContext();
    ... add all the tools ...
    render

(Keep in mind this is a worst case scenario, because we do have caching... BUT...) Even the set of utility functions (tool classes) available during each may be slightly different.

So, I believe that for our scenario we should probably be using the "toolbox" approach, because we are re-creating and re-populating the same list of tools (and params) into a new context on every request. We might have three re-usable toolbox configurations, one for each type of Velocity usage. Our tools are all thread safe already, so that's not an issue.

If we were to do this, would you expect that we would decrease the contention on the method cache?
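To make the idea concrete, here is a minimal, self-contained sketch of the reuse pattern being proposed (Velocity's VelocityContext supports this via its chaining constructor, but the class and names below are illustrative stand-ins, not Velocity's actual implementation): one shared, read-only "toolbox" map per tier, populated once, with a cheap per-render context layered on top so only request-specific params are added each time.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a chained context: per-render data lives in a local
// map, and lookups fall through to a shared, reusable "toolbox" map.
public class ChainedContext {
    private final Map<String, Object> local = new HashMap<String, Object>();
    private final Map<String, Object> shared; // populated once, reused across requests

    public ChainedContext(Map<String, Object> shared) {
        this.shared = shared;
    }

    public void put(String key, Object value) {
        local.put(key, value); // per-render params never touch the shared toolbox
    }

    public Object get(String key) {
        Object value = local.get(key);
        return (value != null) ? value : shared.get(key); // fall through to tools
    }

    public static void main(String[] args) {
        // One toolbox per tier (portlet wrapper, layout, CMS content),
        // built once at startup; tools must be thread safe, as noted above.
        Map<String, Object> portletToolbox =
                Collections.singletonMap("dateTool", (Object) "shared-date-tool");

        // Per render: wrap the shared toolbox, add only request-specific data.
        ChainedContext vc = new ChainedContext(portletToolbox);
        vc.put("portletTitle", "News");

        System.out.println(vc.get("portletTitle")); // local value
        System.out.println(vc.get("dateTool"));     // falls through to the toolbox
    }
}
```

The point is that the 20+20+1 renders per request each do only the cheap local puts; the tool population happens once per tier, not once per render.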
Thanks for your input,
Raymond

On Thu, 2008-07-24 at 22:44 -0700, Nathan Bubna wrote:
> On Thu, Jul 24, 2008 at 10:31 PM, Raymond Auge <[EMAIL PROTECTED]> wrote:
> > Hello Nathan,
> >
> > We might be willing to move to 1.6-dev, but it really depends on its
> > stability.
> >
> > How would you compare it to the current 1.5 release?
>
> a few less bugs, much less memory use, generally faster, and has new
> toys like support for vararg method calls and calling List methods on
> arrays. :)
>
> > Is it as stable?
>
> if API stability is what you are curious about, the only external API
> that i recall offhand as being changed is the StringResourceLoader, as
> the 1.5 version was broken. if by "as stable" you mean "as reliable",
> i think that it is, but i don't use it in any situations where it is a
> high load bottleneck for me. so, my opinion there probably means less
> than you just trying it out yourself. :)
>
> > Ray
> >
> > On Thu, 2008-07-24 at 22:20 -0700, Nathan Bubna wrote:
> >
> > On Thu, Jul 24, 2008 at 10:15 PM, Raymond Auge <[EMAIL PROTECTED]> wrote:
> >> Hello Nathan,
> >>
> >> I just finished writing an alternate UberspectImpl based on our own
> >> MethodCache implementation. I'll let you know if we notice any
> >> significant changes in performance.
> >
> > Please do, and if so, would you be willing to share your code too?
> >
> >> Ray
> >>
> >> PS: We had already done some tweaking to use ConcurrentHashMap and
> >> removed some sync blocks in the cache... but we still hit a bottleneck.
> >
> > If you're willing to do such tweaks, then i'd highly recommend
> > starting with the current head version (Velocity 1.6-dev). as has
> > been said, there have already been a lot of performance tweaks made,
> > and there are more in the pipeline already, just waiting on some
> > confirmation (see VELOCITY-606 and VELOCITY-595 for those).
> >
> >> On Thu, 2008-07-24 at 22:02 -0700, Nathan Bubna wrote:
> >>
> >>> On Thu, Jul 24, 2008 at 2:53 PM, Raymond Auge <[EMAIL PROTECTED]> wrote:
> >>> #snip()
> >>> > Under heavy load we hit a max throughput and thread dumps during this
> >>> > time are completely filled with BLOCKED threads as below:
> >>> >
> >>> > [snip]
> >>> > "http-80-Processor47" daemon prio=10 tid=0x00002aabbdb90400 nid=0x5a59
> >>> > waiting for monitor entry [0x0000000044c72000..0x0000000044c74a80]
> >>> > java.lang.Thread.State: BLOCKED (on object monitor)
> >>> > at org.apache.velocity.util.introspection.IntrospectorBase.getMethod(IntrospectorBase.java:103)
> >>> > - waiting to lock <0x00002aaad093d940> (a org.apache.velocity.util.introspection.IntrospectorCacheImpl)
> >>> > at org.apache.velocity.util.introspection.Introspector.getMethod(Introspector.java:101)
> >>> #snip()
> >>>
> >>> I do find it interesting that there is so much blocking going on at
> >>> this particular point. It didn't appear all that high on any of the
> >>> profiler outputs yet. Perhaps that's just oversight on my part, or
> >>> perhaps it may be because of the heavy evaluate() use in this
> >>> particular case, but still, if we can find a way to speed it up, that
> >>> would be good nonetheless. I'll look into it a bit. It may turn out
> >>> to be another spot that mostly needs to wait for the JDK 1.5
> >>> concurrency classes, but perhaps there is something that can be done.
> >>> I do notice right off the bat that the synchronization of the get()
> >>> and put() methods of IntrospectorCacheImpl seems unnecessary as they
> >>> are being used within a block synchronized on their instance. With
> >>> re-entrant synchronization that might not make a big difference, but
> >>> it's something. I bet we could also be more fine-grained here and
> >>> synchronize on something like the Class being introspected.
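[Editorial note: the fine-grained idea floated in the quote above could look roughly like this. This is a hypothetical sketch, not Velocity's IntrospectorCacheImpl: it uses a JDK 1.5 ConcurrentHashMap keyed on the introspected Class, so threads resolving methods on different classes never contend on one shared monitor.]

```java
import java.lang.reflect.Method;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative lock-free method cache: one method map per introspected Class.
// Overloads are ignored (last one wins) to keep the sketch short; a real
// implementation would key on the full signature.
public class MethodCache {
    private final ConcurrentMap<Class<?>, ConcurrentMap<String, Method>> cache =
            new ConcurrentHashMap<Class<?>, ConcurrentMap<String, Method>>();

    public Method get(Class<?> clazz, String name) {
        ConcurrentMap<String, Method> methods = cache.get(clazz);
        if (methods == null) {
            // Introspect once per class; if two threads race, the loser just
            // discards its copy instead of blocking on a monitor.
            ConcurrentMap<String, Method> fresh =
                    new ConcurrentHashMap<String, Method>();
            for (Method m : clazz.getMethods()) {
                fresh.put(m.getName(), m);
            }
            ConcurrentMap<String, Method> existing = cache.putIfAbsent(clazz, fresh);
            methods = (existing != null) ? existing : fresh;
        }
        return methods.get(name);
    }
}
```

Compared with synchronizing the whole cache, the only cost of losing the putIfAbsent race is one wasted introspection pass, which is typically far cheaper than serializing every template render behind a single lock.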
----------------------------------
Raymond Augé
Software Engineer
Liferay, Inc.
Enterprise. Open Source. For Life.
----------------------------------

Liferay Meetup 2008 – Los Angeles

August 1, 2008

Meet and brainstorm with the creators of Liferay Portal, our partners and other members of our community!

The day will consist of a series of technical sessions presented by our integration and services partners. There is time set aside for Q&A and corporate brainstorming to give the community a chance to give feedback and make suggestions!
