Yep... each core will handle 4 hardware threads. It was built for multiple threads, and a JVM (multithreaded, of course) is made for this box. For an app server under load, I think the numbers should be impressive on this box.
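[Editorial aside, not part of the original thread: the "hardware threads" claim is easy to verify from inside the JVM. A minimal sketch, assuming only the standard library; `Runtime.availableProcessors()` reports logical CPUs (hardware threads), not physical cores, so a 4-core x 4-thread configuration should report 16.]

```java
// Minimal sketch: how many "CPUs" the JVM sees on a CMT box like the T2000.
// availableProcessors() counts hardware threads, not physical cores.
public class CpuCount {
    public static void main(String[] args) {
        int logicalCpus = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical CPUs visible to the JVM: " + logicalCpus);
    }
}
```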
The cool thing about this machine is that each core is viewed as 4 CPUs. That means this box looks like it has 16 CPUs running (4 cores x 4 hardware threads)! I can't wait to pound on this and see what it can do. I am going to have T2000->T2000 trials going on to maximize the throughput per CPU. They each come with four 1 Gb Ethernet ports, so I am going to pipe these up and hopefully there will be zero bottleneck on I/O.

Jeff

Matt Hogstrom wrote:
> I'm interested... the CoolThreads should be interesting. This is 4
> cores and 8 threads per core or something like that?
>
> Jeff Genender wrote:
>> I will have some numbers soon too...
>>
>> We are running on some Sun T2000 4-core machines. The numbers should be
>> interesting on these boxes ;-)
>>
>> Jeff
>>
>> Matt Hogstrom wrote:
>>
>>> I'm going to start externalizing the data as it evolves and adopt
>>> Aaron's progressive disclosure method of publishing.
>>>
>>> As a first attempt to get us on the same page, here is a drawing of my
>>> configuration:
>>>
>>> http://people.apache.org/~hogstrom/performance/PerformanceTestBed.pdf
>>>
>>> There is a PNG as well if that's better for you, but it takes longer
>>> to download. What does your test bed look like?
>>>
>>> Also, in my testing I'm not receiving any errors (none reported,
>>> anyway). I think we need to be careful with results that are failing.
>>> I'm assuming you're running on a 4-way. I'd like to work on resolving
>>> the problems you're having so we'll be comparing similar data.
>>>
>>> Matt
>>>
>>> Maxim Berkultsev wrote:
>>>
>>>> Matt,
>>>>
>>>> First of all, sorry for the confusion - all the throughput values
>>>> should be divided by 60 :).
>>>>
>>>> I've rerun both tests.
>>>>
>>>> The parameters for the run were the ones you mentioned - 100 users
>>>> with a 10 ms delay for each request from a user. Also, for the second
>>>> scenario I changed the configuration to Direct Mode with 5000 users
>>>> and 10000 quotes.
>>>>
>>>> Here is the data I've got:
>>>>
>>>> ----------------------------------
>>>>
>>>> Example 1: PingServlet2SessionEJB
>>>> Your data: 100 168 582
>>>> Mine:      100 472 145
>>>>
>>>> About 6600 samples (66 requests per thread) were used - about 10
>>>> percent of the responses had "Non HTTP response code".
>>>>
>>>> The resulting throughput was calculated as 8754/60 = 145.
>>>>
>>>> -----------------------------------
>>>>
>>>> Example 2: PingServlet2TwoPhase
>>>> Your data: 100 2096 46
>>>> Mine:      100 3324 29
>>>>
>>>> 4000 samples were provided.
>>>>
>>>> But only ~10 percent of the queries returned 200 OK; the rest
>>>> returned 500 Error.
>>>>
>>>> Here is a typical error log I was receiving from the server:
>>>>
>>>> *******************************
>>>>
>>>> java.lang.NullPointerException
>>>>     at org.apache.jsp.error_jsp._jspService(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(org.apache.jsp.error_jsp:96)
>>>>     at org.apache.jasper.runtime.HttpJspBase.service(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(HttpJspBase.java:97)
>>>>     at javax.servlet.http.HttpServlet.service(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(Optimized Method)
>>>>     at org.apache.jasper.servlet.JspServletWrapper.service(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;Z)V(JspServletWrapper.java:322)
>>>>     at org.apache.jasper.servlet.JspServlet.serviceJspFile(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;Ljava.lang.String;Ljava.lang.Throwable;Z)V(JspServlet.java:314)
>>>>     at org.apache.jasper.servlet.JspServlet.service(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(JspServlet.java:264)
>>>>     at javax.servlet.http.HttpServlet.service(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(Optimized Method)
>>>>     at org.mortbay.jetty.servlet.ServletHolder.handle(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(ServletHolder.java:428)
>>>>     at org.apache.geronimo.jetty.JettyServletHolder.handle(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(JettyServletHolder.java:99)
>>>>     at org.mortbay.jetty.servlet.WebApplicationHandler$CachedChain.doFilter(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(WebApplicationHandler.java:830)
>>>>     at org.mortbay.jetty.servlet.JSR154Filter.doFilter(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;Ljavax.servlet.FilterChain;)V(JSR154Filter.java:170)
>>>>     at org.mortbay.jetty.servlet.WebApplicationHandler$CachedChain.doFilter(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(WebApplicationHandler.java:821)
>>>>     at org.mortbay.jetty.servlet.WebApplicationHandler.dispatch(Ljava.lang.String;Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;Lorg.mortbay.jetty.servlet.ServletHolder;I)V(WebApplicationHandler.java:471)
>>>>     at org.mortbay.jetty.servlet.Dispatcher.dispatch(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;I)V(Dispatcher.java:283)
>>>>     at org.mortbay.jetty.servlet.Dispatcher.error(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(Dispatcher.java:179)
>>>>     at org.mortbay.jetty.servlet.ServletHttpResponse.sendError(ILjava.lang.String;)V(ServletHttpResponse.java:415)
>>>>     at javax.servlet.http.HttpServletResponseWrapper.sendError(ILjava.lang.String;)V(HttpServletResponseWrapper.java:107)
>>>>     at org.apache.geronimo.samples.daytrader.web.prims.PingServlet2TwoPhase.doGet(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(Optimized Method)
>>>>     at javax.servlet.http.HttpServlet.service(Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;)V(Optimized Method)
>>>>     at javax.servlet.http.HttpServlet.service(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(Optimized Method)
>>>>     at org.mortbay.jetty.servlet.ServletHolder.handle(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(ServletHolder.java:428)
>>>>     at org.apache.geronimo.jetty.JettyServletHolder.handle(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(JettyServletHolder.java:99)
>>>>     at org.mortbay.jetty.servlet.WebApplicationHandler$CachedChain.doFilter(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(WebApplicationHandler.java:830)
>>>>     at org.mortbay.jetty.servlet.JSR154Filter.doFilter(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;Ljavax.servlet.FilterChain;)V(JSR154Filter.java:170)
>>>>     at org.mortbay.jetty.servlet.WebApplicationHandler$CachedChain.doFilter(Ljavax.servlet.ServletRequest;Ljavax.servlet.ServletResponse;)V(WebApplicationHandler.java:821)
>>>>     at org.mortbay.jetty.servlet.WebApplicationHandler.dispatch(Ljava.lang.String;Ljavax.servlet.http.HttpServletRequest;Ljavax.servlet.http.HttpServletResponse;Lorg.mortbay.jetty.servlet.ServletHolder;I)V(WebApplicationHandler.java:471)
>>>>     at org.mortbay.jetty.servlet.ServletHandler.handle(Ljava.lang.String;Ljava.lang.String;Lorg.mortbay.http.HttpRequest;Lorg.mortbay.http.HttpResponse;)V(Optimized Method)
>>>>     at org.mortbay.jetty.servlet.WebApplicationContext.handle(Ljava.lang.String;Ljava.lang.String;Lorg.mortbay.http.HttpRequest;Lorg.mortbay.http.HttpResponse;)V(Optimized Method)
>>>>     at org.mortbay.http.HttpContext.handle(Lorg.mortbay.http.HttpRequest;Lorg.mortbay.http.HttpResponse;)V(Optimized Method)
>>>>     at org.mortbay.http.HttpServer.service(Lorg.mortbay.http.HttpRequest;Lorg.mortbay.http.HttpResponse;)Lorg.mortbay.http.HttpContext;(Optimized Method)
>>>>     at org.mortbay.http.HttpConnection.service(Lorg.mortbay.http.HttpRequest;Lorg.mortbay.http.HttpResponse;)Lorg.mortbay.http.HttpContext;(Optimized Method)
>>>>     at org.mortbay.http.HttpConnection.handleNext()Z(Optimized Method)
>>>>
>>>> *******************************
>>>> -----------------------------------
>>>>
>>>> To run the experiments I was using my desktop with a default Geronimo
>>>> installation + JRockit 1.4.2_04.
>>>>
>>>> How can one avoid the problems with illegal response codes when
>>>> multiple threads are at work? Have I omitted some step in the
>>>> DayTrader configuration before starting the experiment?
>>>>
>>>> Thank you for your help.
>>>>
>>>> --
>>>> Best regards,
>>>> Maxim Berkultsev, Intel Middleware Products Division
>>>>
>>>> 2006/4/5, Matt Hogstrom wrote:
>>>>
>>>>> Maxim,
>>>>>
>>>>> Thanks for sharing your results. I have a whole set of numbers that
>>>>> I've been sitting on. My tests don't use JMeter, so if you would like
>>>>> to share your setup for that, I'll incorporate it into the DayTrader
>>>>> tree.
>>>>>
>>>>> All the tests I've run were with a fixed number of 100 users with
>>>>> 10 ms think time. My goal was to stress the server and see if it
>>>>> would stay up and how it would perform. The system I'm testing on is
>>>>> an Intel 2 x 3.0 GHz Potomac system. Each processor has an 8MB L3
>>>>> cache. My tests were conducted with the Sun JDK (1.4.2_b09). The
>>>>> database system is on a separate box.
>>>>> I'm using DB2, as Oracle has some clause in their license that does
>>>>> not allow publishing benchmark results without their express
>>>>> permission.
>>>>>
>>>>> See inline.
>>>>>
>>>>> I'm rerunning some tests this afternoon as it looks like we're not
>>>>> comparing the same things. I am using an internal load generator and
>>>>> would like to move to something open source so we can all compare
>>>>> the same numbers.
>>>>>
>>>>> Matt
>>>>>
>>>>> Maxim Berkultsev wrote:
>>>>>
>>>>>> Hi, all!
>>>>>>
>>>>>> Geronimo peak performance is under test; let me share some results.
>>>>>>
>>>>>> I was using JMeter and the DayTrader web primitives to measure
>>>>>> throughput for a fixed number of simultaneously working virtual
>>>>>> users.
>>>>>>
>>>>>> However, I've realized that the results for equal numbers of users
>>>>>> in each scenario do not look valuable, so I tried to find peak
>>>>>> throughput values depending on the number of users. It looks as if
>>>>>> such a peak is reached either when the number of users is minimal
>>>>>> or at some 'optimal' number of users.
>>>>>>
>>>>>> I've used two scenarios.
>>>>>>
>>>>>> Example 1: For the PingServlet2SessionEJB scenario from the
>>>>>> DayTrader web primitives I got the maximum throughput (~14670) for
>>>>>> the minimal number of users - 5 - with an average time per single
>>>>>> request of 17. The table below contains triples (number of users,
>>>>>> average request time, throughput) for different numbers of users.
>>>>>>
>>>>>> ----------------
>>>>>> 5 17 14670
>>>>>> ----------------
>>>>>> 10 40 13037
>>>>>> ----------------
>>>>>> 50 188 12646
>>>>>> ----------------
>>>>>> 100 447 11028
>>>>>
>>>>> 100 168 582 << I'm confused by this. Actually, all these numbers are
>>>>> way higher than I'm achieving. Can you shed some light on your
>>>>> configuration? Also, I assume you're not getting 404s or something?
>>>>>
>>>>>> ----------------
>>>>>> 150 588 10770
>>>>>> ----------------
>>>>>> 200 634 10444
>>>>>> ----------------
>>>>>>
>>>>>> It looks as if the peak is reached when the number of users is
>>>>>> minimal.
>>>>>>
>>>>>> Example 2: In the PingServlet2TwoPhase scenario the throughput
>>>>>> grows to some saturation value and then begins to decrease. The
>>>>>> maximal throughput values (~1300-1350) cover a wide interval
>>>>>> between >10 and 150 virtual users. Here is the table of triples
>>>>>> (number of users, average request time, throughput):
>>>>>>
>>>>>> ----------------
>>>>>> 5, 390, 764
>>>>>> ----------------
>>>>>> 10, 492, 1207
>>>>>> ----------------
>>>>>> 50, 2250, 1314
>>>>>> ----------------
>>>>>> 100, 4380, 1356
>>>>>
>>>>> 100 2096 46 << Again... something is out of sorts. Can you run
>>>>> Direct Mode with 5000 users and 10000 quotes?
>>>>>
>>>>>> ----------------
>>>>>> 150, 6580, 1350
>>>>>> ----------------
>>>>>> 200, 9050, 1260
>>>>>> ----------------
>>>>>>
>>>>>> These values do not pretend to significant :) accuracy, but they
>>>>>> show a general trend.
>>>>>>
>>>>>> There is usually some 'common sense' number of users to be used in
>>>>>> performance estimations. Can someone provide an idea of how to find
>>>>>> this value for Geronimo?
>>>>>>
>>>>>> Thank you.
>>>>>>
>>>>>> --
>>>>>> Best regards,
>>>>>> Maxim Berkultsev, Intel Middleware Products Division
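[Editorial aside, not part of the original thread: one common way to pick a 'common sense' user count, or to sanity-check results like the ones above, is Little's law: concurrent users N = throughput X times (response time R + think time Z). A minimal sketch below plugs in the thread's Example 1 numbers (100 configured users, 447 ms average response time, 10 ms think time, 11028 requests/min); the variable names are illustrative, not from any tool in the thread.]

```java
// Little's law sanity check: N = X * (R + Z)
// N = concurrent users, X = throughput (req/s),
// R = response time (s), Z = think time (s).
// Input numbers are taken from Example 1 in the thread.
public class LittlesLaw {
    public static void main(String[] args) {
        double throughputPerSec = 11028.0 / 60.0; // thread reports per-minute values
        double responseSec = 0.447;
        double thinkSec = 0.010;
        double impliedUsers = throughputPerSec * (responseSec + thinkSec);
        System.out.printf("Implied concurrent users: %.0f%n", impliedUsers);
    }
}
```

The result is roughly 84 implied users against 100 configured; a gap like that usually points at time not captured in the reported average (connection setup, errors, or client-side queueing), which may be relevant to the error-rate questions raised above.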
