>> Furthermore, I am afraid we actually queue a lot of jobs (more than one) when
>> the counter is decreased by one.
>> I think it may be the root problem?
>
> Yes, until the next IKE_SA is checked in, packets will be processed.
Do you want me to file an issue for that?
>
>> The only visible effect is to set a job limit, but since it is global we
>> I know these settings and they look promising.
>
> Why not use them then?
We do, but we still suffer from DoS attacks that seem trivial to set up.
>> Unfortunately, as I said before, they seem to be useless since the counter is
>> increased too late in the IKE_SA manager.
>
> Yeah, I noticed that it's quite late. Since strongSwan calculates the
> IKE keys while processing the
>> - is charon.init_limit_job_load the only relevant setting for DoS protection?
>
> No, there are several others. The first is charon.cookie_threshold (and
> charon.dos_protection), which causes COOKIEs to get returned if the
> global number of half-open SAs exceeds the limit, which helps if the
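For reference, the settings mentioned in this thread live in the charon section of strongswan.conf. A minimal sketch (the values below are examples only, not recommendations; see the strongswan.conf man page for defaults):

```
# strongswan.conf (fragment) -- illustrative values
charon {
    # return COOKIEs once the global half-open SA count exceeds this
    cookie_threshold = 10
    # enforce COOKIEs and address blocking
    dos_protection = yes
    # refuse new IKE_SA_INITs while the job load exceeds this (0 = disabled)
    init_limit_job_load = 200
    # refuse new IKE_SA_INITs above this many half-open IKE_SAs (0 = disabled)
    init_limit_half_open = 1000
}
```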
Hi Emeric,
> Questions:
> - why is this counter increased after the first message has successfully
> been handled from the job queue?
The half-open SA counter is increased whenever an IKE_SA object is
checked into the IKE_SA manager after processing (or initiating) an
IKE_SA_INIT request, an