Hi guys,
I have added a Timer interceptor at two places in the chain to measure
the time consumed in the backend and in the full chain (a sketch of
such an interceptor follows the results below). I got some interesting
results:
- A search does a lookup
- A lookup costs 62 microseconds on the backend, 85 microseconds when
traversing the full chain
- A search costs 24 microseconds on the backend, 102 microseconds when
traversing the full chain
- it takes 35.563 seconds to do 100 000 searches through the network,
so a single search costs around 355 microseconds end to end
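
For illustration, here is a minimal sketch of the kind of timing
interceptor described above. The Request, InterceptorChain and
Interceptor types are simplified placeholders, not the actual ApacheDS
interceptor API.

    // Minimal timing interceptor sketch; the types below are simplified
    // placeholders, not the real ApacheDS interceptor API.
    interface Request {}

    interface InterceptorChain {
        Object process(Request request) throws Exception;
    }

    interface Interceptor {
        Object process(Request request, InterceptorChain next) throws Exception;
    }

    // Wraps the rest of the chain and reports how long the call took.
    class TimerInterceptor implements Interceptor {
        private final String label;

        TimerInterceptor(String label) {
            this.label = label;
        }

        public Object process(Request request, InterceptorChain next) throws Exception {
            long start = System.nanoTime();

            try {
                return next.process(request);
            } finally {
                long micros = (System.nanoTime() - start) / 1000L;
                System.out.println(label + " : " + micros + " microseconds");
            }
        }
    }

One instance sits near the top of the chain (full-chain time) and
another just above the backend (backend time); the difference between
the two gives the cost of the interceptors in between.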
If we consider that a search does a lookup, we should expect something
around 85 + 24 = 109 microseconds for a global search instead of the
measured 102 microseconds, but timing accuracy may well account for
such a difference.
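
Spelling out the arithmetic behind these figures (just restating the
numbers above):

    85 µs (lookup, full chain) + 24 µs (search, backend) = 109 µs expected,
    vs. 102 µs measured for a search through the full chain
    35.563 s / 100 000 searches = 355.63 µs per search over the network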
Anyway, that means we can do around 9200 search requests per second on
the server, not including the network layer (request encoding +
decoding, response encoding + decoding), which adds an extra 245
microseconds to the server's delay (it costs 355 microseconds to do a
search through the network).
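
Roughly, those two numbers presumably come from:

    1 000 000 µs / 109 µs per search ≈ 9 200 searches per second (server side)
    355 µs (through the network) - 109 µs (server side) ≈ 245 µs spent in
    the network layer per search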
At this point, as we measure the full time on the client side, we can't
determine the exact number of searches per second the server can
provide, but it's definitely somewhere between 2800 and 9000 per
second, considering that the server has to decode the request and
encode the response.
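
Those bounds follow from the two latencies measured so far:

    upper bound : 1 000 000 µs / 109 µs (chain only)      ≈ 9 000 searches/s
    lower bound : 1 000 000 µs / 355 µs (full round trip) ≈ 2 800 searches/s

The real server capacity lies somewhere in between, since part of the
355 µs round trip is spent on the client and on the wire rather than in
the server itself.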
What we now have to analyze is the reason why we do a lookup for each
search request: if we can avoid this extra call, we may well cut the
processing time in half. We also need to see where else we can speed up
the server.
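
Purely as an illustration of one possible direction (this is not how
the actual code is organized, and every name below is hypothetical): if
the lookup only fetches an entry the search needs anyway, its result
could be memoized in a per-operation context so the backend is not hit
twice for the same DN.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical placeholder for the backend; not the real partition API.
    interface Backend {
        Object lookup(String dn);
    }

    // Hypothetical per-request context that remembers entries already
    // fetched, so a search does not repeat the lookup it just triggered.
    class OperationContext {
        private final Map<String, Object> entryCache = new HashMap<String, Object>();
        private final Backend backend;

        OperationContext(Backend backend) {
            this.backend = backend;
        }

        Object lookup(String dn) {
            Object entry = entryCache.get(dn);

            if (entry == null) {
                entry = backend.lookup(dn);
                entryCache.put(dn, entry);
            }

            return entry;
        }
    }

Whether something like this is applicable depends on why the lookup is
done in the first place, which is exactly what needs to be analyzed.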
More to come later.
--
Regards,
Cordialement,
Emmanuel Lécharny
www.nextury.com