Sorry, it was too early in the morning... this works for me:

diff --git a/e2fsck/util.c b/e2fsck/util.c
index 56c6b35..ca92d9e 100644
--- a/e2fsck/util.c
+++ b/e2fsck/util.c
@@ -112,7 +112,7 @@ int e2fsck_strnlen(const char * s, int count)
 #ifndef HAVE_CONIO_H
 static int read_a_char(void)
 {
-       char    c;
+       char    buf[256];
        int     r;
        int     fail = 0;

@@ -121,10 +121,10 @@ static int read_a_char(void)
                    (e2fsck_global_ctx->flags & E2F_FLAG_CANCEL)) {
                        return 3;
                }
-               r = read(0, &c, 1);
+               r = read(0, buf, sizeof(buf));
                if (r == 1)
-                       return c;
-               if (fail++ > 100)
+                       return buf[0];
+               if (r == 0 && fail++ > 100)
                        break;
        }
        return EOF;

This increments the fail counter only if nothing was read from stdin, because breaking out of the loop (returning EOF) makes fsck continue. In that case cancelling fsck would be more correct, matching the behavior when fsck is invoked without a tty on stdin.

Correctly recognizing escape sequences would be better, but is that possible without using ncurses/slang?

Reading many characters from stdin and throwing away all but the first is, in my (admittedly limited) view, safe as long as stdin is guaranteed to be a tty. I did not find any fsck switch that allows anything else.

Also, this will not fix the issue when getch() is used (i.e. when conio.h is present), if that code path has the issue at all. I don't know whether that applies to any architecture supported by Debian.

I am not a C programmer; I am just sending a patch suggestion (which may be buggy) because I want to see this fixed. I do not agree with the normal severity of this bug, as it could easily cause severe data damage if the user is not aware of this behavior. I suggest at least serious severity, so that it is caught by apt-listbugs in its default configuration.

Cheers,
Petr
