Thanks, Juha.
You are correct. That feature request is directly relevant to this thread.
I will upvote the feature request and add a link back to this thread.
The proposed workaround of using async workers is the one I opted to use.
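For anyone finding this thread later, the async handoff described above can be sketched roughly like this (route name, method filter, and worker count are illustrative, not taken from any actual config):

```
loadmodule "tm.so"
loadmodule "async.so"

# number of async task worker processes
async_workers=8

request_route {
    if (is_method("INVITE")) {
        # suspend the transaction and hand it to an async task worker,
        # freeing the SIP receiver process immediately
        async_task_route("EXPENSIVE_OPERATION");
        exit;
    }
}

route[EXPENSIVE_OPERATION] {
    # the blocking/expensive work (e.g. a DB or HTTP lookup) runs here,
    # inside an async task worker, after which routing continues
}
```

The key point is that the receiver process returns to the event loop as soon as the transaction is suspended, so slow operations no longer block TCP/TLS message processing.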
___
Kamailio (SER)
This issue may be related to the question:
https://github.com/kamailio/kamailio/issues/1107
-- Juha
___
Kamailio (SER) - Users Mailing List
sr-users@lists.kamailio.org
https://lists.kamailio.org/cgi-bin/mailman/listinfo/sr-users
Thanks.
Regarding UDP, that all makes sense.
For my short-term needs, to minimize risk and infrastructure changes, it looks
like the ASYNC module will work.
Here's the basic outline of what I'm doing:
async_workers=8

route {
    route(CHECK_IS_EXPENSIVE_OPERATION);
    ...
}
On Fri, Feb 23, 2018 at 07:17:48PM, Cody Herzog wrote:
> That makes sense, but is unfortunately not an option for me due to
> strict security requirements. I need to use TLS on the whole path.
Personally, I would work around that requirement, either by using a
compliant private backplane/b
>A common design which avoids this is to use TCP at the client edge and
>UDP inside the network core. This is one of the reasons why TCP is not
>optimal for use inside the core.
That makes sense, but is unfortunately not an option for me due to strict
security requirements.
I need to use TLS on the whole path.
A common design which avoids this is to use TCP at the client edge and
UDP inside the network core. This is one of the reasons why TCP is not
optimal for use inside the core.
--
Alex Balashov | Principal | Evariste Systems LLC
Tel: +1-706-510-6800 / +1-800-250-5920 (toll-free)
Web: http://www.e
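The edge-TCP/core-UDP split described above can be sketched in config terms roughly like this, for an edge proxy that terminates client TLS and relays to the core over UDP (all addresses, ports, and hostnames here are made up for illustration):

```
loadmodule "tm.so"

# terminate client TLS at the edge, speak UDP toward the core
listen=tls:203.0.113.10:5061
listen=udp:10.0.0.10:5060

request_route {
    # force plain UDP toward the core hop (address illustrative)
    $du = "sip:10.0.0.20:5060;transport=udp";
    t_relay();
}
```

As noted below, this design assumes the internal network is trusted; it is not an option when end-to-end TLS is a hard requirement.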
Thanks very much for the quick replies, Alex and Brandon.
The main reason I'm hitting a bottleneck is that my architecture is not
optimal.
I have a number of edge proxies which communicate with all my clients.
The clients are usually distributed pretty evenly across all the proxies.
On those
Cody,
Kamailio should receive from normal TCP /
Kernel stack handoff - you may be able to do some tuning with sysctl.
However, one alternate suggestion that could help spread load across actual
Kamailio TCP workers is firing up additional workers on alternate ports,
but this still would not ensu
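In config terms, the suggestions above map onto core parameters roughly like this (the values and addresses are illustrative; note that tcp_children defaults to the value of children when unset):

```
children=8        # UDP receiver processes per listen socket
tcp_children=16   # processes handling TCP/TLS reads and accept()

# additional listeners on alternate ports, so clients can be
# spread across them at connection time (address illustrative)
listen=tls:203.0.113.10:5061
listen=tls:203.0.113.10:5062
```

As discussed further down the thread, this spreads load at the connection level only: messages on a persistent connection still land on the same worker.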
Hi,
As with UDP workers, the kernel divides incoming TCP in a semi-random
way at a trough of TCP workers all calling accept(). So, the
distribution is indeed at the connection level rather than the message
level, and so long as the connection persists, messages go to the same
worker.
In theory, a