On Sun, Aug 07, 2022 at 11:10:22AM +0200, Omar Polo wrote:
> I'm not sure httpd(8) behaves correctly when the fastcgi application
> (e.g. slowcgi) closes the connection prematurely.
>
> To verify it, I'm playing with three simple CGI scripts running under
> slowcgi with a very low timeout (-t2). The scripts test "hangs"
> (which slowcgi turns into hard disconnections) in different places:
>
> /* slow.c -- hang before replying */
> sleep(10);
> puts("Content-Type: text/plain\n");
> puts("hello, world");
> return 0;
>
> /* slow2.c -- hang during headers */
> puts("Content-Type: text/plain");
> fflush(stdout);
> sleep(10);
> puts("");
> puts("hello, world");
> return 0;
>
> /* slow3.c -- hang during the body */
> puts("Content-Type: text/plain\n");
> printf("hello, (wait for it...)");
> fflush(stdout);
> sleep(10);
> puts("world!");
> return 0;
>
> Those sleep calls are meant to hit slowcgi_timeout which abruptly
> closes the connection with httpd(8).
>
> httpd handles the fastcgi termination via server_file_error, which
> assumes the request's headers and body were already sent and so goes
> on to set up for the next request (if clt_persist). When the fastcgi
> reply wasn't complete, this leaves the HTTP client hanging, waiting
> for a response that will never arrive, until it gives up.
>
> Diff below (first three hunks) addresses this issue by introducing a
> server_fcgi_error callback that handles the "no header" and
> "incomplete data" cases before calling server_file_error. With it,
> the first two scripts result in httpd's "500 error" page and the
> third is correctly closed after the first line.
>
> I'm also bundling another small change for slowcgi to make it send a
> FCGI_END_REQUEST record instead of abruptly closing the request. I
> think it's better to let the fastcgi server know that the request is
> over, but the slowcgi/httpd combo works even without it.
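For context, the record slowcgi would now send is defined by the FastCGI
1.0 spec; here is a sketch of its on-the-wire layout (struct names are
mine, field meanings and sizes follow the spec, not slowcgi's actual
code):

```c
/*
 * Sketch of an FCGI_END_REQUEST record per the FastCGI 1.0 spec:
 * an 8-byte record header followed by an 8-byte end-request body.
 * Multi-byte fields are big endian on the wire.
 */
#include <stdint.h>

#define FCGI_VERSION_1		1
#define FCGI_END_REQUEST	3	/* record type */
#define FCGI_REQUEST_COMPLETE	0	/* protocolStatus */

struct fcgi_record_header {
	uint8_t		version;	/* FCGI_VERSION_1 */
	uint8_t		type;		/* FCGI_END_REQUEST */
	uint16_t	id;		/* request id */
	uint16_t	content_len;	/* length of the body below */
	uint8_t		padding_len;
	uint8_t		reserved;
} __attribute__((packed));

struct fcgi_end_request_body {
	uint32_t	app_status;	/* CGI exit status */
	uint8_t		protocol_status; /* FCGI_REQUEST_COMPLETE */
	uint8_t		reserved[3];
} __attribute__((packed));
```

Sending this instead of just closing the socket gives the web server a
positive end-of-request signal to act on.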
>
>
> diff refs/heads/wip refs/heads/fastcgi
> commit - 3aac3f8529325cbd58806247542caabd23befb6e
> commit + cab6b39fe822d105a2dc852f2435693a6eff834f
> blob - 381fade2924c4b5cea77cd9cd6500e75d4d59257
> blob + b6541b7c68235ac1dfc5a9d0243db988e5932a7f
> --- usr.sbin/httpd/server_fcgi.c
> +++ usr.sbin/httpd/server_fcgi.c
> @@ -77,6 +77,7 @@ struct server_fcgi_param {
> };
>
> int server_fcgi_header(struct client *, unsigned int);
> +void server_fcgi_error(struct bufferevent *, short, void *);
> void server_fcgi_read(struct bufferevent *, void *);
> int server_fcgi_writeheader(struct client *, struct kv *, void *);
> int server_fcgi_writechunk(struct client *);
> @@ -133,7 +134,7 @@ server_fcgi(struct httpd *env, struct client *clt)
>
> clt->clt_srvbev_throttled = 0;
> clt->clt_srvbev = bufferevent_new(fd, server_fcgi_read,
> - NULL, server_file_error, clt);
> + NULL, server_fcgi_error, clt);
> if (clt->clt_srvbev == NULL) {
> errstr = "failed to allocate fcgi buffer event";
> goto fail;
> @@ -482,6 +483,23 @@ fcgi_add_param(struct server_fcgi_param *p, const char
> }
>
> void
> +server_fcgi_error(struct bufferevent *bev, short error, void *arg)
> +{
> + struct client *clt = arg;
> +
> + if ((error & EVBUFFER_EOF) && !clt->clt_fcgi.headersdone) {
> + server_abort_http(clt, 500, "malformed or no headers");
> + return;
> + }
> +
> + /* send the end marker if not already */
> + if (clt->clt_fcgi.chunked && !clt->clt_fcgi.end++)
> + server_bufferevent_print(clt, "0\r\n\r\n");
> +
> + server_file_error(bev, error, arg);
> +}
> +
> +void
> server_fcgi_read(struct bufferevent *bev, void *arg)
> {
> uint8_t buf[FCGI_RECORD_SIZE];
I already gave an OK for this. It still is OK.
> blob - ddf83f965d0e6a99ada695694bea77b775bae2aa
> blob + 1d577ba63efca388ca3644d1a52d9b3d9f246014
> --- usr.sbin/slowcgi/slowcgi.c
> +++ usr.sbin/slowcgi/slowcgi.c
> @@ -515,7 +515,34 @@ slowcgi_accept(int fd, short events, void *arg)
> void
> slowcgi_timeout(int fd, short events, void *arg)
> {
> - cleanup_request((struct request*) arg);
> + struct request *c;
> +
> + c = arg;
> +
> + if (c->script_flags & (STDOUT_DONE | STDERR_DONE | SCRIPT_DONE))
Is this the correct check? If we hit the timeout we want to close all
pipes to the CGI process and hope it dies because of that. One of
STDOUT_DONE or STDERR_DONE may already be set while the script is
still running.
I think it would be better to use:
if (c->script_flags == (STDOUT_DONE | STDERR_DONE | SCRIPT_DONE))
return;
here.
Should slowcgi kill the command if SCRIPT_DONE is not set?
> + return;
> +
> + if (!c->stdout_fd_closed) {
> + c->stdout_fd_closed = 1;
> + close(EVENT_FD(&c->script_ev));
> + event_del(&c->script_ev);
> + }
> +
> + if (!c->stderr_fd_closed) {
> + c->stderr_fd_closed = 1;
> + close(EVENT_FD(&c->script_err_ev));
> + event_del(&c->script_err_ev);
> + }
> +
> + if (!c->stdin_fd_closed) {
> + c->stdin_fd_closed = 1;
> + close(EVENT_FD(&c->script_stdin_ev));
> + event_del(&c->script_stdin_ev);
> + }
> +
> + c->script_status = SIGALRM;
> + c->script_flags = STDOUT_DONE | STDERR_DONE | SCRIPT_DONE;
> + create_end_record(c);
> }
>
> void
>
Apart from that I like this a lot. slowcgi is a bit sloppy about
implementing the fcgi protocol properly. This is one piece of the
puzzle for sure.
--
:wq Claudio