[ http://issues.apache.org/jira/browse/HIVEMIND-162?page=comments#action_12359580 ]

Jesse Kuhnert commented on HIVEMIND-162:
----------------------------------------

Using some of the concurrent API to do these synchronizations may also help 
quite a bit. It's part of 1.5 already, but still downloadable separately. 

Using more efficient "monitors" and such to synchronize access will make the 
threading infrastructure much more scalable. I've personally done a lot of work 
on appserver-style thread management in multi-CPU environments, and I think the 
concurrent API has answers to most of your needs.
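
As a rough illustration of what I mean (this is not HiveMind code -- ServiceCore 
and the other names are made up), the concurrent utilities make it easy to 
publish the shared, lazily-built piece through an AtomicReference so that the 
common path never touches a monitor, while the per-thread instances themselves 
sit in a ThreadLocal with no cross-thread lock at all:

    import java.util.concurrent.atomic.AtomicReference;

    // Sketch only -- ServiceCore and buildCore() stand in for whatever
    // shared state the synchronized method is really protecting.
    public class NonBlockingThreadedLookup {
        // shared, lazily-built piece, published without holding a monitor
        private final AtomicReference<ServiceCore> _core = new AtomicReference<ServiceCore>();

        // per-thread instances never need a cross-thread lock
        private final ThreadLocal<Object> _perThread = new ThreadLocal<Object>();

        public Object getServiceImplementationForCurrentThread() {
            Object service = _perThread.get();
            if (service == null) {
                service = core().construct(); // only the first call on each thread constructs
                _perThread.set(service);
            }
            return service;
        }

        private ServiceCore core() {
            ServiceCore c = _core.get();
            if (c == null) {
                // losing this race wastes one throwaway core; nobody ever blocks
                _core.compareAndSet(null, buildCore());
                c = _core.get();
            }
            return c;
        }

        private ServiceCore buildCore() {
            return new ServiceCore();
        }

        // hypothetical stand-in for the real construction logic
        static class ServiceCore {
            Object construct() {
                return new Object();
            }
        }
    }

And since the API is also downloadable separately, this wouldn't have to force a 
move to a 1.5 VM.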

> Performance bottleneck with threaded services
> ---------------------------------------------
>
>          Key: HIVEMIND-162
>          URL: http://issues.apache.org/jira/browse/HIVEMIND-162
>      Project: HiveMind
>         Type: Bug
>   Components: framework
>     Versions: 1.1
>  Environment: Linux 2.6.14.2, JBoss 4.0.3, Tomcat 5.5
> Quad Xeon 3.2GHz, 1GB RAM
>     Reporter: Jeff Lubetkin

>
> Note: This may be better classified as a HiveMind issue, but it's affecting 
> Tapestry throughput, so I'm putting it here.
>
> We've been running some perf tests using the Grinder (with 20 threads), 
> generating as much load as possible on a single non-trivial page.  The page 
> doesn't touch many of our biz logic services, but it does have some complex 
> componentry to render.
>
> We were seeing performance ramp just fine until we reached about 200 TPS, 
> using only 50% CPU.  No matter how many clients we threw at it, we couldn't 
> get it any higher.  A thread dump showed that most threads were bottlenecked 
> on a synchronized method in HiveMind 
> (servicemodel.ThreadedServiceModel.constructServiceForCurrentThread, see 
> Stack #1 below).  This was the construction of the threaded 
> ClientPropertyPersistenceStrategy service.  Since we don't use the client 
> strategy, I did a little hivemodule.xml magic and removed it from the 
> PersistenceStrategy configuration.  That gave us a huge increase in 
> throughput, up to 490 TPS, still using only about 50% CPU, but we still hit 
> a bottleneck.  Again, a thread dump showed the culprit to be 
> constructServiceForCurrentThread, this time in the storage of the 
> RequestGlobals (see Stack #2).  We can't remove this service, so we've hit a 
> ceiling.
>
> Until either Tapestry changes its usage of threaded services or HiveMind is 
> changed not to synchronize in this way, this looks like a ceiling for 
> Tapestry performance.  It'd be nice to be able to use all of the CPU on the 
> box :)
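>
> For reference, the pattern the thread dumps below show boils down to 
> something like this (a simplified sketch, not the actual HiveMind source; 
> the helper name is hypothetical): one service-wide monitor is taken every 
> time a thread needs to build its per-thread instance, so under load the 
> request threads all queue on the same lock even though each instance is 
> only ever used by one thread.
>
>     // simplified sketch, not ThreadedServiceModel itself
>     public synchronized Object constructServiceForCurrentThread() {
>         // everything here runs behind one monitor shared by all request threads
>         return createPerThreadInstance(); // hypothetical stand-in for the real construction
>     }
>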
> ================== STACK #1 ==================
>     [java] "http-0.0.0.0-8080-7" daemon prio=1 tid=0x6eaf62e0 nid=0x1261 waiting for monitor entry [0x6f69f000..0x6f6a0840]
>     [java]     at org.apache.hivemind.impl.servicemodel.ThreadedServiceModel.constructServiceForCurrentThread(ThreadedServiceModel.java:166)
>     [java]     - waiting to lock <0x4e861d88> (a org.apache.hivemind.impl.servicemodel.ThreadedServiceModel)
>     [java]     at org.apache.hivemind.impl.servicemodel.ThreadedServiceModel.getServiceImplementationForCurrentThread(ThreadedServiceModel.java:157)
>     [java]     at $PropertyPersistenceStrategy_107fc3dfcff._service($PropertyPersistenceStrategy_107fc3dfcff.java)
>     [java]     at $PropertyPersistenceStrategy_107fc3dfcff.getStoredChanges($PropertyPersistenceStrategy_107fc3dfcff.java)
>     [java]     at $PropertyPersistenceStrategy_107fc3dfd00.getStoredChanges($PropertyPersistenceStrategy_107fc3dfd00.java)
>     [java]     at org.apache.tapestry.record.PropertyPersistenceStrategySourceImpl.getAllStoredChanges(PropertyPersistenceStrategySourceImpl.java:73)
>     [java]     at $PropertyPersistenceStrategySource_107fc3dfc3c.getAllStoredChanges($PropertyPersistenceStrategySource_107fc3dfc3c.java)
>     [java]     at org.apache.tapestry.record.PageRecorderImpl.getChanges(PageRecorderImpl.java:68)
>     [java]     at org.apache.tapestry.record.PageRecorderImpl.rollback(PageRecorderImpl.java:73)
>     [java]     at org.apache.tapestry.engine.RequestCycle.loadPage(RequestCycle.java:277)
>     [java]     at org.apache.tapestry.engine.RequestCycle.getPage(RequestCycle.java:249)
>     [java]     at org.apache.tapestry.engine.RequestCycle.activate(RequestCycle.java:612)
>     [java]     at org.apache.tapestry.engine.PageService.service(PageService.java:66)
>     [java]     at $IEngineService_107fc3dfc4c.service($IEngineService_107fc3dfc4c.java)
>     [java]     at org.apache.tapestry.services.impl.EngineServiceOuterProxy.service(EngineServiceOuterProxy.java:65)
>     [java]     at org.apache.tapestry.engine.AbstractEngine.service(AbstractEngine.java:248)
>     [java]     at org.apache.tapestry.services.impl.InvokeEngineTerminator.service(InvokeEngineTerminator.java:60)
>     [java]     at $WebRequestServicer_107fc3dfc28.service($WebRequestServicer_107fc3dfc28.java)
>     [java]     at $WebRequestServicer_107fc3dfc24.service($WebRequestServicer_107fc3dfc24.java)
>     [java]     at org.apache.tapestry.services.impl.WebRequestServicerPipelineBridge.service(WebRequestServicerPipelineBridge.java:56)
>     [java] etc...
> ================== STACK #2 ==================
>     [java] "http-0.0.0.0-8080-52" daemon prio=1 tid=0x89613328 nid=0x20a9 waiting for monitor entry [0x86d7e000..0x86d7f740]
>     [java]     at org.apache.hivemind.impl.servicemodel.ThreadedServiceModel.constructServiceForCurrentThread(ThreadedServiceModel.java:166)
>     [java]     - waiting to lock <0x9503a218> (a org.apache.hivemind.impl.servicemodel.ThreadedServiceModel)
>     [java]     at org.apache.hivemind.impl.servicemodel.ThreadedServiceModel.getServiceImplementationForCurrentThread(ThreadedServiceModel.java:157)
>     [java]     at $RequestGlobals_1080261890e._service($RequestGlobals_1080261890e.java)
>     [java]     at $RequestGlobals_1080261890e.store($RequestGlobals_1080261890e.java)
>     [java]     at $RequestGlobals_1080261890f.store($RequestGlobals_1080261890f.java)
>     [java]     at org.apache.tapestry.services.impl.WebRequestServicerPipelineBridge.service(WebRequestServicerPipelineBridge.java:49)
>     [java]     at $ServletRequestServicer_108026188f4.service($ServletRequestServicer_108026188f4.java)
>     [java]     at org.apache.tapestry.request.DecodedRequestInjector.service(DecodedRequestInjector.java:55)
>     [java]     at $ServletRequestServicerFilter_108026188f0.service($ServletRequestServicerFilter_108026188f0.java)
>     [java]     at $ServletRequestServicer_108026188f6.service($ServletRequestServicer_108026188f6.java)
>     [java]     at org.apache.tapestry.multipart.MultipartDecoderFilter.service(MultipartDecoderFilter.java:52)
>     [java]     at $ServletRequestServicerFilter_108026188ee.service($ServletRequestServicerFilter_108026188ee.java)
>     [java]     at $ServletRequestServicer_108026188f6.service($ServletRequestServicer_108026188f6.java)
>     [java]     at org.apache.tapestry.services.impl.SetupRequestEncoding.service(SetupRequestEncoding.java:53)
>     [java]     at $ServletRequestServicerFilter_108026188f2.service($ServletRequestServicerFilter_108026188f2.java)
>     [java]     at $ServletRequestServicer_108026188f6.service($ServletRequestServicer_108026188f6.java)
>     [java]     at $ServletRequestServicer_108026188e8.service($ServletRequestServicer_108026188e8.java)
>     [java]     at org.apache.tapestry.ApplicationServlet.doService(ApplicationServlet.java:123)
>     [java]     at com.zillow.web.ZillowServlet.doService(ZillowServlet.java:35)
>     [java]     at org.apache.tapestry.ApplicationServlet.doGet(ApplicationServlet.java:79)
>     [java]     at javax.servlet.http.HttpServlet.service(HttpServlet.java:697)
>     [java]     at javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
>     [java]     at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
>     [java] etc...

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
