Hi Venkat,

As Kasun mentioned, you have done a great job, and thanks for all the hard work. We are looking forward to working with you in the future as well.
Thanks

On Tue, Aug 23, 2016 at 2:05 AM, Kasun Indrasiri <[email protected]> wrote:

> On Thu, Aug 18, 2016 at 9:09 AM, Venkat Raman <[email protected]> wrote:
>
>> Hi All,
>>
>> This project has been added to the WSO2 Incubator
>> <https://github.com/wso2-incubator/HTTP-Load-balancer>, and I've been
>> given membership to the same :D
>>
>> I would like to thank you all for your kind support and guidance throughout
>> the course of this program. I learned a lot, and my programming and design
>> skills have improved greatly. I've also started blogging, and I am part of
>> the VMware Open Source team!
>>
>> All of this was possible because of working on this wonderful project with
>> an amazing team! :)
>>
>> Kindly have a look at my blog posts. I would love to hear your suggestions.
>>
>> 1) Architecture: https://venkat2811.blogspot.in/2016/08/http-load-balancer-on-top-of-wso2.html
>> 2) Performance: https://venkat2811.blogspot.in/2016/08/http-load-balancer-on-top-of-wso2_18.html
>>
>> I'll also be completing my final evaluations today. @IsuruR and Kasun,
>> it would be great if both of you could share your feedback with me as well :)
>
> Great job Venkat. You showed great passion and commitment throughout the
> project. It was a great pleasure to work with you, and we would be more
> than happy to work with you in the future. Please keep in touch!
>
> Thanks,
> Kasun
>
>> Also, have a look at the status updates and todo list
>> <https://docs.google.com/document/d/1kEk706KkwjK3gFpjx56vNwAKzVQpa-G52MHIC_I7-0Y/edit>
>> document, which is also available in the README.md of the project. As mentioned
>> earlier, I will be working with you to make this project production ready.
>>
>> Once again, thank you very much for giving me this wonderful opportunity!
>>
>> *Thanks,*
>> *Venkat.*
>>
>> On Wed, Aug 17, 2016 at 9:09 PM, Venkat Raman <[email protected]> wrote:
>>
>>> Hi Isuru,
>>>
>>> I've written two blog posts:
>>>
>>> 1) One with the project repository and a discussion of the high-level
>>> architecture, engine architecture, message flow, and current features.
>>> 2) The other on performance benchmarks.
>>>
>>> It would be great if you could create the incubation repo so that I can
>>> update the URL in the blog post.
>>>
>>> *Thanks,*
>>> *Venkat.*
>>>
>>> On Wed, Aug 17, 2016 at 8:15 PM, Venkat Raman <[email protected]> wrote:
>>>
>>>> Hi Kasun,
>>>>
>>>> PFA benchmarks done with one OutboundEndpoint.
>>>>
>>>> *Thanks,*
>>>> *Venkat.*
>>>>
>>>> On Mon, Aug 15, 2016 at 4:41 PM, Venkat Raman <[email protected]> wrote:
>>>>
>>>>> Hi Isuru,
>>>>>
>>>>> Please find the 12th week's progress:
>>>>>
>>>>> 1) Did performance benchmarks based on suggestions from the code review.
>>>>> 2) Did performance benchmarks on the carbon gateway framework.
>>>>> 3) Did JFR recordings on the gateway framework and the LB.
>>>>> 4) Found that the more Outbound Endpoints there are, the lower the performance.
>>>>> 5) Currently writing a blog post.
>>>>> 6) Graphs added to the repo. Kindly find them here
>>>>> <https://github.com/Venkat2811/product-http-load-balancer/tree/master/performance-benchmark>
>>>>> and here <https://github.com/Venkat2811/product-http-load-balancer>.
>>>>>
>>>>> *Thanks,*
>>>>> *Venkat.*
>>>>>
>>>>> On Thu, Aug 11, 2016 at 11:13 PM, Venkat Raman <[email protected]> wrote:
>>>>>
>>>>>> Hi Isuru,
>>>>>>
>>>>>> In the previous email, I attached JFRs for:
>>>>>>
>>>>>> 1) LB (with health check and timeout) with 5 OutboundEPs
>>>>>> 2) i-server with 1 OutboundEP
>>>>>>
>>>>>> In this mail, I'm attaching:
>>>>>>
>>>>>> 1) A very simple, lightweight mediator written to do load balancing
>>>>>> across 5 OutboundEPs in round-robin fashion
>>>>>>
>>>>>> *Thanks,*
>>>>>> *Venkat.*
>>>>>>
>>>>>> On Thu, Aug 11, 2016 at 10:21 PM, Venkat Raman <[email protected]> wrote:
>>>>>>
>>>>>>> Hi Kasun,
>>>>>>>
>>>>>>> Yes, I looked into JFR. With one endpoint, the disruptor is the most used.
>>>>>>>
>>>>>>> With 5 endpoints, HashMap is the most used.
>>>>>>>
>>>>>>> *Thanks,*
>>>>>>> *Venkat.*
>>>>>>>
>>>>>>> On Thu, Aug 11, 2016 at 10:12 PM, Kasun Indrasiri <[email protected]> wrote:
>>>>>>>
>>>>>>>> Venkat, please check whether this is bounded by contention in
>>>>>>>> a map or something; maybe that's why it slows down when we have multiple
>>>>>>>> endpoints.
>>>>>>>>
>>>>>>>> On Thu, Aug 11, 2016 at 2:43 AM, Venkat Raman <[email protected]> wrote:
>>>>>>>>
>>>>>>>>> Hi Isuru,
>>>>>>>>>
>>>>>>>>> Please find the attached benchmark results from today.
>>>>>>>>> I've used only one OutboundEP for Nginx, LB, and i-server. LB with
>>>>>>>>> the HealthCheck and Timeout features is performing close to i-server. So,
>>>>>>>>> based on this, is the number of connections causing overhead?
>>>>>>>>>
>>>>>>>>> In the previous tests as well, only one OutboundEP was used when
>>>>>>>>> benchmarking i-server, against 5 OutboundEPs for Nginx and GW-LB.
>>>>>>>>>
>>>>>>>>> Even the JFR results of i-server and LB are similar. The contentions and
>>>>>>>>> latency occurring in i-server are occurring in LB as well.
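[Editor's note: for anyone reproducing recordings like the ones compared above, on the JDK 8-era JVMs in use at the time a JFR recording could be started at launch or attached to a running process. The file names, duration, and jar name below are illustrative, not the actual settings used for the attached recordings.]

```shell
# Start a recording at JVM launch (JDK 8 era: JFR required unlocking commercial features)
java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
     -XX:StartFlightRecording=duration=120s,filename=lb-benchmark.jfr \
     -jar carbon-gateway.jar

# Or attach to an already-running process by pid using jcmd
jcmd <pid> JFR.start duration=120s filename=lb-benchmark.jfr
```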
>>>>>>>>>
>>>>>>>>> Also, please find the attached JFR files.
>>>>>>>>>
>>>>>>>>> *Thanks,*
>>>>>>>>> *Venkat.*
>>>>>>>>>
>>>>>>>>> On Wed, Aug 10, 2016 at 2:09 PM, Venkat Raman <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Isuru,
>>>>>>>>>>
>>>>>>>>>> Please find the attached results with and without the disruptor enabled.
>>>>>>>>>>
>>>>>>>>>> *Thanks,*
>>>>>>>>>> *Venkat.*
>>>>>>>>>>
>>>>>>>>>> On Wed, Aug 10, 2016 at 9:23 AM, Venkat Raman <[email protected]> wrote:
>>>>>>>>>>
>>>>>>>>>>> Sure, Kasun. Will try to find it.
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Venkat.
>>>>>>>>>>>
>>>>>>>>>>> On Aug 10, 2016 5:30 AM, "Kasun Indrasiri" <[email protected]> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi Venkat,
>>>>>>>>>>>>
>>>>>>>>>>>> The drop in the performance of the LB compared to the GW Framework
>>>>>>>>>>>> seems to be way too much. I think we can't afford to lose nearly 50% of
>>>>>>>>>>>> the throughput because of the LB components. Let's try to identify the
>>>>>>>>>>>> bottlenecks related to the LB code.
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Aug 9, 2016 at 4:18 PM, Venkat Raman <[email protected]> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Isuru & Kasun,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Please find the attached results document. As discussed, I
>>>>>>>>>>>>> created a new VM for benchmarking.
>>>>>>>>>>>>>
>>>>>>>>>>>>> It seems the TPS of 20,000 (from yesterday's results), even at
>>>>>>>>>>>>> a higher concurrency level, is not accurate. Sorry for the confusion caused.
>>>>>>>>>>>>> Most of the time, endpoints were marked as unhealthy and a direct
>>>>>>>>>>>>> error response was returned by the LB Mediator, which resulted in a high
>>>>>>>>>>>>> TPS. I tried multiple benchmark runs since yesterday and was never able
>>>>>>>>>>>>> to achieve that result.
>>>>>>>>>>>>>
>>>>>>>>>>>>> In this test, to avoid such cases, a higher unHealthyRetries
>>>>>>>>>>>>> count has been configured.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Also, I've benchmarked the performance of GW-FMW using i-server
>>>>>>>>>>>>> with this simple
>>>>>>>>>>>>> <https://github.com/Venkat2811/product-http-load-balancer/blob/master/performance-benchmark/gw-framework/router.iflow>
>>>>>>>>>>>>> configuration. It is a simple route, even without if-else conditions.
>>>>>>>>>>>>>
>>>>>>>>>>>>> As you can see, it is performing 2x faster than the LB.
>>>>>>>>>>>>>
>>>>>>>>>>>>> The next steps would be to do a memory benchmark and plot graphs
>>>>>>>>>>>>> with these values. Once the repo, documentation, and blog are ready,
>>>>>>>>>>>>> I'll be using JFR to identify bottlenecks and fine-tune the LB's
>>>>>>>>>>>>> performance.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I look forward to hearing your feedback on this.
>>>>>>>>>>>>>
>>>>>>>>>>>>> *Thanks,*
>>>>>>>>>>>>> *Venkat.*
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, Aug 9, 2016 at 1:13 AM, Venkat Raman <[email protected]> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Sure Kasun, I'll do a perf benchmark between i-server and LB
>>>>>>>>>>>>>> in a new VM as discussed.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> Venkat.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Aug 9, 2016 12:55 AM, "Kasun Indrasiri" <[email protected]> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> - Compare GW framework perf vs LB (need to identify any
>>>>>>>>>>>>>>> perf impact from the LB-related code).
>>>>>>>>>>>>>>> - Identify the reason for the apparent perf bottleneck at
>>>>>>>>>>>>>>> high concurrency.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Mon, Aug 8, 2016 at 10:55 AM, Venkat Raman <[email protected]> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi Kasun,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Please find the latest results after Saturday's code review.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> *Thanks,*
>>>>>>>>>>>>>>>> *Venkat.*
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Mon, Aug 8, 2016 at 10:06 AM, Venkat Raman <[email protected]> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hi Isuru,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Good morning. Please find the 11th week's progress:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 1) Had code reviews and made a few suggested corrections.
>>>>>>>>>>>>>>>>> 2) Did some groundwork for using JFR.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I will continue working on performance tuning.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> @Kasun - Tomorrow is August 9th. Can we have the demo?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> *Thanks,*
>>>>>>>>>>>>>>>>> *Venkat.*
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Sat, Aug 6, 2016 at 10:16 PM, Venkat Raman <[email protected]> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Hi Isuru,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Here are the findings from today's review:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> 1) Change CallMediatorMap from ConcurrentHashMap to HashMap.
>>>>>>>>>>>>>>>>>> 2) Remove the unnecessary synchronized block around the
>>>>>>>>>>>>>>>>>> areAllEndpointsUnhealthy() check.
>>>>>>>>>>>>>>>>>> 3) Rename LoadBalancerCallMediator to LBEndpointsCallMediator.
>>>>>>>>>>>>>>>>>> 4) Send a PR adding a getUri() method to the gateway-framework.
>>>>>>>>>>>>>>>>>> 5) Use Java Flight Recorder while benchmarking to identify bottlenecks.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> *Venkat.*
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Fri, Aug 5, 2016 at 1:43 PM, Venkat Raman <[email protected]> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Hi Isuru,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Please find the attached latest benchmark without
>>>>>>>>>>>>>>>>>>> synchronization, the callback pool, and health checks.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Throughput is just 1000 times faster than my current
>>>>>>>>>>>>>>>>>>> implementation.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> It is drastically falling because of some other reason.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> *Thanks,*
>>>>>>>>>>>>>>>>>>> *Venkat.*
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Fri, Aug 5, 2016 at 10:09 AM, Venkat Raman <[email protected]> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Hi Isuru,
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> FYI
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> *Thanks,*
>>>>>>>>>>>>>>>>>>>> *Venkat.*
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>>>>>>> From: Venkat Raman <[email protected]>
>>>>>>>>>>>>>>>>>>>> Date: Thu, Aug 4, 2016 at 10:39 AM
>>>>>>>>>>>>>>>>>>>> Subject: Re: GSoC Project: HTTP Load Balancer on Top of WSO2 Gateway Discussion
>>>>>>>>>>>>>>>>>>>> To: Isuru Ranawaka <[email protected]>, Kasun Indrasiri <[email protected]>
>>>>>>>>>>>>>>>>>>>> Cc: DEV <[email protected]>, Senduran Balasubramaniyam <[email protected]>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Hi Isuru & Kasun,
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Please find the attached result document
>>>>>>>>>>>>>>>>>>>> (raw-engine-transport.xlsx). I've done a test with the raw-engine-transport
>>>>>>>>>>>>>>>>>>>> without any BE. It is performing great and is close to the Netty-based BE!
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> The problem is with the LB only.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> My guess is that the CallbackPool (using a concurrent
>>>>>>>>>>>>>>>>>>>> HashMap) that we are using to determine timeouts is the bottleneck.
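[Editor's note: to make the CallbackPool guess above concrete, a pool of this shape keeps one shared ConcurrentHashMap that every request thread writes to and every response thread removes from. The class and method names below are hypothetical, not the project's actual API.]

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a callback pool used for timeout detection.
// Not the project's actual CallbackPool implementation.
class CallbackPool {
    static final class Callback {
        final long createdAt = System.currentTimeMillis(); // used by the timeout check
    }

    private final ConcurrentHashMap<String, Callback> pool = new ConcurrentHashMap<>();

    // Called on every outbound request.
    void register(String correlationId, Callback cb) {
        pool.put(correlationId, cb);
    }

    // Called on every response; returns null if already completed or expired.
    Callback complete(String correlationId) {
        return pool.remove(correlationId);
    }

    // Called periodically by a timeout-checker thread.
    void expireOlderThan(long timeoutMillis) {
        long now = System.currentTimeMillis();
        pool.entrySet().removeIf(e -> now - e.getValue().createdAt > timeoutMillis);
    }
}
```

Because each request does a put() and each response a remove() on the same map, at high concurrency the map's internal locking and CAS traffic can show up as contention in a JFR recording, which would match the symptom described in this thread.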
>>>>>>>>>>>>>>>>>>>> I'll disable the callback pool, do the benchmark, and update you on that.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> *Thanks,*
>>>>>>>>>>>>>>>>>>>> *Venkat.*
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Thu, Aug 4, 2016 at 9:48 AM, Venkat Raman <[email protected]> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Okay, Isuru.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> *Thanks,*
>>>>>>>>>>>>>>>>>>>>> *Venkat.*
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Thu, Aug 4, 2016 at 9:42 AM, Isuru Ranawaka <[email protected]> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Hi Venkat,
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Yes, we can. Let's have a call today around 9.30 p.m.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Thu, Aug 4, 2016 at 9:34 AM, Venkat Raman <[email protected]> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Hi Isuru,
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Good morning. Last night I spoke with Kasun about the
>>>>>>>>>>>>>>>>>>>>>>> latest benchmark results. Even without any locking,
>>>>>>>>>>>>>>>>>>>>>>> performance is not good beyond a concurrency of 5000.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> As you have benchmarked up to a concurrency of 3000,
>>>>>>>>>>>>>>>>>>>>>>> we would both like to benchmark the raw carbon-transport up to a
>>>>>>>>>>>>>>>>>>>>>>> concurrency of 10,000, with 100,000 requests, so that we get an idea of this.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> How do we do that? Will a simple response from the
>>>>>>>>>>>>>>>>>>>>>>> engine suffice? Can I use the LB to send a simple response directly without
>>>>>>>>>>>>>>>>>>>>>>> doing any mediation?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> *Thanks,*
>>>>>>>>>>>>>>>>>>>>>>> *Venkat.*
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Thu, Aug 4, 2016 at 12:19 AM, Venkat Raman <[email protected]> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Hi Isuru,
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Please find the attached benchmark results. As
>>>>>>>>>>>>>>>>>>>>>>>> discussed, I disabled health checking and removed the synchronized block,
>>>>>>>>>>>>>>>>>>>>>>>> using an AtomicInteger in one test, and also did a test without any kind of
>>>>>>>>>>>>>>>>>>>>>>>> lock or atomic integers.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Throughput and latency results are positive, but
>>>>>>>>>>>>>>>>>>>>>>>> beyond a concurrency level of 5000 they are not that good. So even if we use
>>>>>>>>>>>>>>>>>>>>>>>> a read-write lock or a stamped lock, we will get only a small performance
>>>>>>>>>>>>>>>>>>>>>>>> gain.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> I feel that if we can benchmark with the
>>>>>>>>>>>>>>>>>>>>>>>> integration-server up to 10000 concurrent connections, we'll get a better
>>>>>>>>>>>>>>>>>>>>>>>> idea. Is that okay?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> *Thanks,*
>>>>>>>>>>>>>>>>>>>>>>>> *Venkat.*
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Aug 2, 2016 at 9:39 PM, Venkat Raman <[email protected]> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Hi Isuru & Kasun,
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Please find the findings from today's code review.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> 1) Locking in the getNextLBOutboundEndpoint() method
>>>>>>>>>>>>>>>>>>>>>>>>> in the algorithm implementation is causing overhead. We have to find a way to
>>>>>>>>>>>>>>>>>>>>>>>>> handle communication between threads efficiently, to reduce the locking overhead.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> 2) Code repo freeze by August 15th for the sake of
>>>>>>>>>>>>>>>>>>>>>>>>> GSoC. If we can find a way to overcome the locking overhead before August
>>>>>>>>>>>>>>>>>>>>>>>>> 15th, those changes will be added to the code repo. Otherwise, they will be
>>>>>>>>>>>>>>>>>>>>>>>>> added after GSoC.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> 3) TPS, latency, and memory graphs to be added.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> 4) Blog post and PDF documentation.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> *Thanks,*
>>>>>>>>>>>>>>>>>>>>>>>>> *Venkat.*
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Mon, Aug 1, 2016 at 9:25 AM, Venkat Raman <[email protected]> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Hi Isuru,
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Good morning. Please find the 10th week's progress:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> 1) Had a discussion with Kasun.
>>>>>>>>>>>>>>>>>>>>>>>>>> 2) As suggested, did performance benchmarking
>>>>>>>>>>>>>>>>>>>>>>>>>> using a Netty BE, and it turns out our LB is beating Nginx up to a
>>>>>>>>>>>>>>>>>>>>>>>>>> concurrency level of 6000, after which it is not performing well.
>>>>>>>>>>>>>>>>>>>>>>>>>> I've attached the results.
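[Editor's note: one lock-free direction for the locking finding above is to replace the synchronized block in the round-robin algorithm with a single AtomicInteger. This is a minimal sketch with hypothetical names, not the project's actual getNextLBOutboundEndpoint() implementation; endpoints are modeled as plain strings for brevity.]

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of lock-free round-robin endpoint selection.
class RoundRobinSelector {
    private final List<String> endpoints;
    private final AtomicInteger counter = new AtomicInteger(0);

    RoundRobinSelector(List<String> endpoints) {
        this.endpoints = endpoints;
    }

    // Atomically advance the counter and map it onto an endpoint index.
    String next() {
        int idx = Math.floorMod(counter.getAndIncrement(), endpoints.size());
        return endpoints.get(idx);
    }
}
```

getAndIncrement() is a single compare-and-swap, so no monitor is held on the request path, and Math.floorMod keeps the index non-negative even after the counter wraps past Integer.MAX_VALUE.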
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> I've started a new thread, as the conversation
>>>>>>>>>>>>>>>>>>>>>>>>>> arrangement was not good in the previous one.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> It would be great if we could have a code review,
>>>>>>>>>>>>>>>>>>>>>>>>>> Isuru. Based on your feedback, I'll be able to make changes, and we can do
>>>>>>>>>>>>>>>>>>>>>>>>>> the benchmarking again. Can we do it today at 9:30 PM? We have only 2 full
>>>>>>>>>>>>>>>>>>>>>>>>>> weeks more; the last week will be for documentation.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> *Thanks,*
>>>>>>>>>>>>>>>>>>>>>>>>>> *Venkat.*
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>> Best Regards
>>>>>>>>>>>>>>>>>>>>>> Isuru Ranawaka
>>>>>>>>>>>>>>>>>>>>>> M: +94714629880
>>>>>>>>>>>>>>>>>>>>>> Blog: http://isurur.blogspot.com/
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> Kasun Indrasiri
>>>>>>>>>>>>>>> Director, Integration Technologies
>>>>>>>>>>>>>>> WSO2, Inc.; http://wso2.com
>>>>>>>>>>>>>>> lean.enterprise.middleware
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> cell: +1 650 450 2293
>>>>>>>>>>>>>>> Blog: http://kasunpanorama.blogspot.com/
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Kasun Indrasiri
>>>>>>>>>>>> Director, Integration Technologies
>>>>>>>>>>>> WSO2, Inc.; http://wso2.com
>>>>>>>>>>>> lean.enterprise.middleware
>>>>>>>>>>>>
>>>>>>>>>>>> cell: +1 650 450 2293
>>>>>>>>>>>> Blog: http://kasunpanorama.blogspot.com/
>>>>>>>>
>>>>>>>> --
>>>>>>>> Kasun Indrasiri
>>>>>>>> Director, Integration Technologies
>>>>>>>> WSO2, Inc.; http://wso2.com
>>>>>>>> lean.enterprise.middleware
>>>>>>>>
>>>>>>>> cell: +1 650 450 2293
>>>>>>>> Blog: http://kasunpanorama.blogspot.com/
>
> --
> Kasun Indrasiri
> Director, Integration Technologies
> WSO2, Inc.; http://wso2.com
> lean.enterprise.middleware
>
> cell: +1 650 450 2293
> Blog: http://kasunpanorama.blogspot.com/

--
Best Regards
Isuru Ranawaka
M: +94714629880
Blog: http://isurur.blogspot.com/
_______________________________________________
Dev mailing list
[email protected]
http://wso2.org/cgi-bin/mailman/listinfo/dev
