> The only non-blocking functions available to us are via the "socket
> class". Is it too crazy if we use a "socket object" to do network IO to a
> process we write that acts as a proxy for disk operations?
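
Concretely, here is a rough sketch of what I have in mind (the helper daemon
on 127.0.0.1:9999 and its "dump the whole file on connect" behaviour are
made-up assumptions); the "lua service" side would stay the same as in my
earlier mail quoted below:

local cached_body = ""

core.register_task(function()
    while true do
        local sock = core.tcp()
        sock:settimeout(2)                       -- don't hang the task forever
        if sock:connect("127.0.0.1", 9999) then  -- hypothetical helper daemon
            local data = sock:receive("*a")      -- Socket class yields, no blocking
            if data then
                cached_body = data
            end
        end
        sock:close()
        core.msleep(5000)                        -- yields back to the scheduler
    end
end)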

Feels stupid that I even mentioned this: if we're going this far, we might
as well just use SPOE instead.

On Mon, Oct 17, 2022 at 12:01 PM Abhijeet Rastogi <[email protected]>
wrote:

> Hi Aurelien,
>
> I really appreciate the response. This confirms that "everything in
> runtime mode" uses the same thread pool as the HTTP workers. It also means
> that the only time we can use "blocking IO" is in "initialization mode",
> i.e. at HAProxy startup/reload time.
>
> I shared the ACL example, but this is what I was really doing (see the
> rough sketch after the list):
>
> * Periodically, every 5 seconds, read the contents of a file on disk via
> io.read() in a "lua task" and save it in a global variable.
> * In a "lua service", return the value of that global variable as the
> response body.
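>
> A minimal sketch of that pattern, for reference (the file path and service
> name are made up; the io.* calls are exactly the blocking calls you warned
> about in your mail below):
>
> local cached_body = ""
>
> -- "lua task": refresh the global variable every 5 seconds
> core.register_task(function()
>     while true do
>         local f = io.open("/etc/haproxy/body.txt", "r")  -- blocking io.*
>         if f then
>             cached_body = f:read("*a")                   -- blocking io.*
>             f:close()
>         end
>         core.msleep(5000)
>     end
> end)
>
> -- "lua service": return the cached contents as the response body
> core.register_service("serve_body", "http", function(applet)
>     applet:set_status(200)
>     applet:add_header("content-length", string.len(cached_body))
>     applet:start_response()
>     applet:send(cached_body)
> end)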
>
> Considering what we've already established in this thread, it looks like
> HAProxy has no way to serve a "static file" that changes after HAProxy has
> been initialized. @Aurelien, please confirm! This seems like a big
> limitation, so there has to be a workaround. Is it just accepted in the
> HAProxy world to "reload instead" every time a file is modified on disk?
>
> The only non-blocking functions available to us are via the "socket
> class". Is it too crazy if we use a "socket object" to do network IO to a
> process we write that acts as a proxy for disk operations?
>
> Thanks,
> Abhijeet
>
> On Mon, Oct 17, 2022 at 1:13 AM Aurelien DARRAGON <[email protected]>
> wrote:
>
>> Hi,
>>
>> it feels like that shouldn't be true for "background tasks" as it is
>> mentioned that they run in separate threads
>>
>> That's not 100% true. Lua tasks do run concurrently with the rest of
>> HAProxy processing.
>> But they don't run in separate "threads": they run in separate HAProxy
>> tasks.
>> What this means is that a Lua task must follow HAProxy's task constraints:
>>
>> Within HAProxy, a task is implemented as a function that gets called from
>> the scheduler.
>> If the function needs to wait for some event/data to continue processing,
>> it must yield control back to the scheduler and may be rescheduled later
>> to continue processing.
>> If the function fails to do so (blocks for too long), the HAProxy watchdog
>> will suspect that a thread is stuck and will crash the whole process
>> unconditionally.
>> That is a consequence of HAProxy's event-driven nature.
>>
>> From a Lua task (a minimal contrast is sketched below the list):
>>
>>    - core.* methods do respect this constraint
>>    - io.* methods don't
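>>
>> A minimal contrast, for illustration only (both task bodies are made up,
>> they are not taken from the example you linked):
>>
>> -- fine: core.msleep() yields back to the HAProxy scheduler while waiting
>> core.register_task(function()
>>     while true do
>>         core.msleep(1000)
>>     end
>> end)
>>
>> -- not fine at runtime: a single io.* call can block the thread (for
>> -- instance opening/reading a fifo that has no writer yet), and while it
>> -- blocks nothing yields, so the watchdog may kill the process
>> core.register_task(function()
>>     local f = io.open("/path/to/some.fifo", "r")  -- made-up path
>>     if f then
>>         f:read("*l")
>>         f:close()
>>     end
>> end)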
>>
>>
>> This example
>> <https://github.com/zareenc/haproxy-lua-examples/blob/f5853013087642c0ed34ed47bea4c6efbf96dd29/lua_scripts/background_thread.lua>
>> was linked on the arpalert.org website, but it seems to use "io.* methods
>> in runtime".
>>
>> To demonstrate this, I took the example you provided (a Lua task that
>> uses io.* methods) and modified it to make sure the IO calls were
>> blocking (i.e. trying to read from /dev/zero):
>> https://gist.github.com/Darlelet/5768628e40c0d960fce8c20649877e12
>>
>> It does not take long for the watchdog to trigger and crash the process.
>>
>> Thus, if you were to use io.* from a Lua task, you could be fine for some
>> time, but as soon as some IO takes too long due to io.*'s blocking nature
>> (and it will happen), you would bring the whole process down.
>>
>> I would highly recommend sticking to core.add_acl/core.del_acl for ACL
>> management from a Lua task.
>> If core.add_acl/core.del_acl is not what you're looking for, you could
>> also drive the HAProxy stats socket directly from the task using the
>> core.tcp facility.
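>>
>> A rough sketch of both options, assuming an ACL file that is already
>> referenced in the configuration (the file name, the keys, and the
>> "stats socket ipv4@127.0.0.1:9990 level admin" line the second option
>> would require are all made up):
>>
>> -- option 1: native ACL updates from a Lua task
>> core.register_task(function()
>>     while true do
>>         core.add_acl("/etc/haproxy/allowed.acl", "203.0.113.10")
>>         core.del_acl("/etc/haproxy/allowed.acl", "203.0.113.99")
>>         core.msleep(5000)
>>     end
>> end)
>>
>> -- option 2: drive the stats socket (exposed over TCP) with core.tcp
>> core.register_task(function()
>>     local sock = core.tcp()
>>     if sock:connect("127.0.0.1", 9990) then
>>         sock:send("add acl /etc/haproxy/allowed.acl 203.0.113.10\n")
>>         sock:receive("*a")  -- drain whatever the CLI replies before it closes
>>     end
>>     sock:close()
>> end)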
>>
>> Regards,
>> Aurelien
>>
>
>

-- 
Cheers,
Abhijeet (https://abhi.host)
