Bug#995996: dump: segfaults on long paths

2021-12-29 Thread наб
Control: tag -1 + patch

On Tue, Oct 12, 2021 at 05:40:42PM +1000, Alexander Zangerl wrote:
> sorry, but i disagree. yes, it is a bug. but no, it is not severity
> important: a 2000+ levels deep directory hierarchy has no
> practical relevance whatsoever.
> and a workaround exists: don't create ridiculously deep hierarchies ;-)
I mean, yeah, fair enough; I have this explicitly for testing coreutils.

Below you'll find a patch for 0.4b47-2, replacing the static MAXPATHLEN
path buffer in treescan() with a dynamically allocated one. This fixes
both the original bug (down to directory depths I assume no-one
has achieved before), and the 
  ./[very long path redacted]: name exceeds 4096 char
diagnostic I got when I originally bumped ulimit -s.

It does use asprintf(3) ‒ while an extension,
it's universal across the extant Berkeley and Solaris derivatives.

-x mode still panics since the paths *are* longer than PATH_MAX, but you
can't pass those to syscalls anyway and restructuring around that would
be non-trivial (in this case, your workaround should be applied
instead :P).

Best,
наб
Date: Wed, 29 Dec 2021 16:26:07 +0100
From: наб 
Subject: Don't trust MAXPATHLEN in treescan()

Dynamically allocate the path buffer:
this both allows paths longer than PATH_MAX,
and thins the stack frame considerably
(no longer segfaulting on a Very Deep tree
 on Debian-default 8k ulimit -s)

--- dump-0.4b47.orig/restore/dirs.c
+++ dump-0.4b47/restore/dirs.c
@@ -287,9 +287,8 @@ treescan(char *pname, dump_ino_t ino, lo
 {
 	struct inotab *itp;
 	struct direct *dp;
-	int namelen;
 	off_t bpt;
-	char locname[MAXPATHLEN + 1];
+	char *locname;
 
 	itp = inotablookup(ino);
 	if (itp == NULL) {
@@ -308,9 +307,6 @@ treescan(char *pname, dump_ino_t ino, lo
 	 * begin search through the directory
 	 * skipping over "." and ".."
 	 */
-	namelen = snprintf(locname, sizeof(locname), "%s/", pname);
-	if (namelen >= (int)sizeof(locname))
-		namelen = sizeof(locname) - 1;
 	rst_seekdir(dirp, itp->t_seekpt, itp->t_seekpt);
 	dp = rst_readdir(dirp);	/* "." */
 	if (dp != NULL && strcmp(dp->d_name, ".") == 0)
@@ -328,15 +324,13 @@ treescan(char *pname, dump_ino_t ino, lo
 	 * a zero inode signals end of directory
 	 */
 	while (dp != NULL) {
-		locname[namelen] = '\0';
-		if (namelen + dp->d_namlen >= (int)sizeof(locname)) {
-			fprintf(stderr, "%s%s: name exceeds %ld char\n",
-				locname, dp->d_name, (long)sizeof(locname) - 1);
-		} else {
-			(void) strncat(locname, dp->d_name, (int)dp->d_namlen);
-			treescan(locname, dp->d_ino, todo);
-			rst_seekdir(dirp, bpt, itp->t_seekpt);
+		if (asprintf(&locname, "%s/%.*s", pname, (int)dp->d_namlen, dp->d_name) == -1) {
+			panic("Error: %s\n", strerror(errno));
+			abort();
 		}
+		treescan(locname, dp->d_ino, todo);
+		free(locname);
+		rst_seekdir(dirp, bpt, itp->t_seekpt);
 		dp = rst_readdir(dirp);
 		bpt = rst_telldir(dirp);
 	}




Bug#995996: dump: segfaults on long paths

2021-10-26 Thread Bernhard Übelacker

Dear Maintainer,
I guess this crash might be a stack exhaustion.

Given that the function treescan in dirs.c allocates 4096 bytes on the
stack for locname, and that this function is called recursively, a deep
enough hierarchy fills up the 8MB stack that 'ulimit -s' reports as the
default on my system.

Therefore another workaround might be to increase the stack size,
e.g. with "ulimit -s 16384". (I have not tested it.)

And a possible software change could be to allocate the memory for
locname from the heap, e.g. with malloc, instead of on the stack.
(If such deep hierarchies are meant to be supported ...)

Kind regards,
Bernhard


https://sources.debian.org/src/dump/0.4b47-2/restore/dirs.c/#L292



Bug#995996: dump: segfaults on long paths

2021-10-12 Thread Alexander Zangerl
On Sat, 09 Oct 2021 19:06:21 +0200, наб writes:
...
># mkdir -p /tmp/img.d/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a
> 2030+ levels deep.
>Picking Severity: important, because this means it's /impossible/ to
>restore (or even view!) a valid dump.

sorry, but i disagree. yes, it is a bug. but no, it is not severity
important: a 2000+ levels deep directory hierarchy has no
practical relevance whatsoever.
and a workaround exists: don't create ridiculously deep hierarchies ;-)

naturally i'll have a look at what's going wrong here, and whether there
is a chance of surviving a pretty hopeless situation like this; don't hold
your breath, however.

regards
az


-- 
Alexander Zangerl + GPG Key 2FCCF66BB963BD5F + http://snafu.priv.at/
I came, I saw, I deleted all your files.

