On a redirect, url_get() closes fin (or s), calls itself recursively and
then jumps to cleanup_url_get, where fin and s get closed a second time.
Mark them as invalid after the first close so the cleanup path skips them.
Another option would be to remove the fclose()/close() calls entirely, but
since they run right before the recursive call, I'm not sure whether keeping
every descriptor in the redirect chain open until the recursion unwinds
would risk hitting the open file descriptor limit.
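
To illustrate, here is a minimal standalone sketch of the cleanup-label
pattern (the function name and fopen() target are hypothetical, and
url_get()'s real logic is elided): once a descriptor is closed early, it
has to be marked invalid or the label closes it again.

#include <stdio.h>

/*
 * Minimal sketch of the double-close hazard; names are made up
 * and this is not the actual fetch.c code.
 */
static int
get(const char *path, int depth)
{
	FILE *fin = NULL;
	int rval = -1;

	if ((fin = fopen(path, "r")) == NULL)
		goto cleanup;

	if (depth == 0) {	/* pretend the server redirected us */
		fclose(fin);
		fin = NULL;	/* without this, the cleanup path below
				 * would fclose(fin) a second time */
		rval = get(path, depth + 1);
		goto cleanup;
	}
	rval = 0;

cleanup:
	if (fin != NULL)
		fclose(fin);
	return (rval);
}

With the "fin = NULL" line removed, the outer call's cleanup would
fclose() a stale FILE pointer, which is what the patch below prevents
in url_get().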
Index: fetch.c
===================================================================
RCS file: /var/cvs/src/usr.bin/ftp/fetch.c,v
retrieving revision 1.164
diff -u -p -r1.164 fetch.c
--- fetch.c 25 Sep 2017 11:04:54 -0000 1.164
+++ fetch.c 6 Feb 2018 00:37:17 -0000
@@ -912,10 +912,13 @@ noslash:
 				*loctail = '\0';
 			if (verbose)
 				fprintf(ttyout, "Redirected to %s\n", redirurl);
-			if (fin != NULL)
+			if (fin != NULL) {
 				fclose(fin);
-			else if (s != -1)
+				fin = NULL;
+			} else if (s != -1) {
 				close(s);
+				s = -1;
+			}
 			rval = url_get(redirurl, proxyenv, savefile, lastfile);
 			free(redirurl);
 			goto cleanup_url_get;
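
FWIW, if the early fclose()/close() were dropped instead, every level of
the redirect recursion would hold its descriptor open until the innermost
call returns and the stack unwinds. Assuming the existing cap on
redirections (redirect_loop) still applies, that should stay bounded, but
resetting the handles avoids the question entirely.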