Hi all, I would like to share some measurements of WSO2 ESB's performance for content-based routing. Before I post the concrete numbers, let me introduce my simple test environment:

Server A (Linux, 4 cores, 4 GB RAM): JBoss AS 4.2 running a simple web service (JBoss WS, JSR 181 POJO-based)
Server B (Linux, 4 cores, 4 GB RAM): JBoss AS 4.2 running the same simple web service as Server A (JBoss WS, JSR 181 POJO-based)
Server C (Linux, 4 cores, 4 GB RAM): WSO2 ESB 1.5
Client (Windows XP, 2 cores, 3 GB RAM): soapUI 2.0

The web service exposes one simple test method which "fakes a real service implementation" with a constant processing time of about 50 ms. The implementation of the test method contains something like this, where name is a method parameter:

    // simulate ~50 ms of real work, then build a host-specific reply
    Thread.sleep(50);
    InetAddress localMachine = InetAddress.getLocalHost();
    return "Hello " + name + " on " + localMachine.getHostName();

A simple SOAP request looks like this:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:esb="http://service.esb.jamba.de/">
   <soapenv:Header>
      <esb:version>1.0</esb:version>
   </soapenv:Header>
   <soapenv:Body>
      <esb:hello>
         <arg0>Test</arg0>
      </esb:hello>
   </soapenv:Body>
</soapenv:Envelope>

The version number can be varied during the tests. Based on the version number I used a switch mediator in the WSO2 ESB. Here is an extract of the generated config:

<syn:switch xmlns:esb="http://service.esb.jamba.de/"
            xmlns:ns1="http://org.apache.synapse/xsd"
            xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
            source="/child::soapenv:Envelope/child::soapenv:Header/child::esb:version">
   <syn:case regex="1.0">
      <syn:send>
         <syn:endpoint key="SimpleWebService_V1.0"/>
      </syn:send>
   </syn:case>
   <syn:case regex="1.1">
      <syn:send>
         <syn:endpoint key="SimpleWebService_V1.1"/>
      </syn:send>
   </syn:case>
</syn:switch>

Maybe the XPath expression could be improved; I'm not very familiar with XPath.
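One small simplification should be safe, though: child:: is the default axis in XPath, so the generated expression can be abbreviated without changing its meaning. A sketch of the same switch (namespaces and cases as above; I have not measured whether this makes any performance difference):

<syn:switch xmlns:esb="http://service.esb.jamba.de/"
            xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
            source="/soapenv:Envelope/soapenv:Header/esb:version">
   <!-- cases unchanged from the extract above -->
</syn:switch>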
So far I did no tests with big messages. Will the whole message get parsed if I only need the content from the header?

I then created a test case in soapUI which includes two test steps: one request with version 1.0 and one with version 1.1.

1) First of all I wanted to test the direct endpoints, so I pointed the first request to Server A and the second request to Server B.
2) After that I pointed both requests to Server C (WSO2 ESB), which routes half of the requests to Server A and half to Server B, based on the version info in the SOAP header.

For both scenarios I used three different tests:

A) Number of Start Threads 1, Number of End Threads 1, 60 seconds (no concurrency)
B) Number of Start Threads 1, Number of End Threads 10, 60 seconds
C) Number of Start Threads 10, Number of End Threads 50, 120 seconds

And here are the numbers:

A1) Direct access to endpoints
------------------------------
avg. response time (ms)     62
min. response time (ms)     56
max. response time (ms)     86
response count             480
responses per second        16
bytes per second          4143

A2) Using WSO2 ESB in the middle
--------------------------------
avg. response time (ms)     76
min. response time (ms)     60
max. response time (ms)     96
response count             391
responses per second        13
bytes per second          3550
max. CPU load         15%/400%

B1) Direct access to endpoints
------------------------------
avg. response time (ms)     62
min. response time (ms)     56
max. response time (ms)     84
response count            2535
responses per second       158
bytes per second          4120

B2) Using WSO2 ESB in the middle
--------------------------------
avg. response time (ms)     78
min. response time (ms)     59
max. response time (ms)    102
response count            2108
responses per second       129
bytes per second          3416
max. CPU load         70%/400%

C1) Direct access to endpoints
------------------------------
avg. response time (ms)     72
min. response time (ms)     55
max. response time (ms)    251
response count           23092
responses per second       692
bytes per second          3459

C2) Using WSO2 ESB in the middle
--------------------------------
avg. response time (ms)     77
min. response time (ms)     58
max. response time (ms)    402
response count           21240
responses per second       649
bytes per second          3897
max. CPU load        170%/400%
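As a quick sanity check, the reported peak rates match what the response times predict: with n concurrent threads each waiting on average t ms per call, the ceiling is n * 1000 / t responses per second, i.e. 1 * 1000/62 = ~16 for A1, 10 * 1000/62 = ~161 for B1 (measured: 158) and 50 * 1000/72 = ~694 for C1 (measured: 692). So the thread counts, not the services, determine the peak rates here.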
From my point of view this does not look too bad; I'm quite happy with these numbers. The overhead increases with the degree of concurrency, and the CPU load on the ESB rises as well. So of course we will need to cluster the ESB and balance the load between different instances. Is there any support for this so far?

Another requirement regards traceability of requests. I really like the tracing infos of WSO2; I guess they are designed to locate configuration problems. Using the INFO level I receive a lot of useful information, of course more than I need, so tracing might be the wrong approach here. What I really would like to have are the following infos:

Incoming request: timestamp, source IP, unique request ID (based on selected custom header information)
Outgoing request: timestamp, destination IP, unique request ID (based on selected custom header information)

This information should go to a database asynchronously. Entries could be grouped using the request ID. Any ideas how to achieve this without noticeable performance degradation?
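To make the question a bit more concrete, the direction I have in mind is something like a custom log mediator at the start of the sequence (an untested sketch; esb:requestId stands for whatever custom header would carry the correlation ID):

<syn:log level="custom" xmlns:esb="http://service.esb.jamba.de/"
         xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <!-- esb:requestId is a hypothetical header, to be replaced by the real one -->
   <syn:property name="requestId" expression="/soapenv:Envelope/soapenv:Header/esb:requestId"/>
   <syn:property name="version" expression="/soapenv:Envelope/soapenv:Header/esb:version"/>
</syn:log>

The timestamp could then come from the logging framework itself (e.g. log4j's %d pattern), and an AsyncAppender in front of a JDBCAppender might get the entries into the database off the request thread. Whether that is fast enough under load I would still have to measure.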
Anyhow, here are the numbers with tracing enabled (INFO level):

A2) Using WSO2 ESB in the middle (tracing, INFO level)
------------------------------------------------------
avg. response time (ms)     77
min. response time (ms)     58
max. response time (ms)     87
response count             887
responses per second        15
bytes per second          4032
max. CPU load         15%/400%

B2) Using WSO2 ESB in the middle (tracing, INFO level)
------------------------------------------------------
avg. response time (ms)     77
min. response time (ms)     57
max. response time (ms)    440
response count            2159
responses per second       130
bytes per second          3416
max. CPU load         70%/400%

C2) Using WSO2 ESB in the middle (tracing, INFO level)
------------------------------------------------------
avg. response time (ms)     86
min. response time (ms)     59
max. response time (ms)    517
response count           17067
responses per second       590
bytes per second          2950
max. CPU load        267%/400%

All measurements were taken without any TCP/IP stack optimization. I basically did this to see how much overhead an ESB will introduce.

Regards,
Eric

--
Eric Hubert
Software Architect
Associate Director Research & Development