Hi Bernd,

On Friday, 21 March 2025 at 15:05, you wrote:

> The index.md [1] of the repo indicates that the XCOPY 1.7 binary is built 
> with Open Watcom.

> There reportedly was an issue [2] regarding stack usage with the Open 
> Watcom-built binaries. 
> Skimming over the source, especially [3], indicates that this may use A LOT 
> of stack space, depending on what value MAXPATH is defined to be (it is 260 
> for my Open Watcom installation). That approaches 2K of stack per recursive 
> call of xcopy_files, so the stack size will likely have to be increased to 
> make it work with a reasonably deep directory nesting level. Independently 
> of any possible errors, the 8K mentioned in the commit may simply not be 
> enough. Heap allocation may be worth considering for these quantities of 
> memory...
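
For illustration, the pattern is roughly this (a minimal sketch, not the 
actual xcopy code; the names are made up):

        /* two MAXPATH buffers already cost ~520 bytes per frame; the
           findfirst/findnext block and saved registers push each
           recursive call toward the ~2K figure mentioned above */
        #define MAXPATH 260

        static void xcopy_files_sketch(const char *src, const char *dst)
        {
                char new_src[MAXPATH];  /* ~260 bytes of stack */
                char new_dst[MAXPATH];  /* ~260 more */

                /* ... build new_src/new_dst, then for each subdirectory: */
                /* xcopy_files_sketch(new_src, new_dst);   cost repeats per level */
        }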

I just recently stumbled upon the same issue, from the same author, in MOVE, 
where recursion works the same way. There it "works" with a stack setting of 2K.

It works because MOVE was compiled with TC, which locates the stack at the 
bottom, so on stack overflow it wraps around to FFFE, where it hopefully 
doesn't interfere with DOS heap management.

That sort of works, but would not survive any stack checking - which Watcom C 
does, so it aborts.
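
For what it's worth, the stack size can simply be raised at link time; with 
Open Watcom something like this should do (a sketch - check the exact flags 
against your wcl/wlink documentation):

        wcl -ml -k16384 xcopy.c
        (or OPTION STACK=16384 in the wlink directive file)

That only pushes the limit out, though; the real fix is below.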

This should be solved by *global* source and destination paths that are 
managed across the recursion, similar to:

        char *origsource = source + strlen(source);   /* point to end of source name */
        ...

        while (find_next_dir(dir_below))
        {
                strcat(source, "\\");
                strcat(source, "DIRBELOW");

                do_stuff_for_directory_down(source);

                *origsource = '\0';
        }
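
Spelled out as a self-contained sketch (find_next_dir() and the helper names 
are placeholders, not actual xcopy functions; bounds checks omitted for 
brevity):

        #include <string.h>

        #define MAXPATH 260

        static char source[MAXPATH];    /* ONE global buffer, not one per frame */

        static void descend(const char *dirname)
        {
                char *origsource = source + strlen(source);  /* end of current path */

                strcat(source, "\\");   /* append this level... */
                strcat(source, dirname);

                /* ... process files here, call descend() for each subdirectory ... */

                *origsource = '\0';     /* ...and truncate again on the way out */
        }

Each recursion level then costs only a return frame and a saved pointer on 
the stack instead of hundreds of bytes of path buffers.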

The same problem is to be expected in deltree(), COPY /S, and friends.

Tom


> 1: https://gitlab.com/FreeDOS/base/xcopy
> 2: https://github.com/FDOS/xcopy/commit/1fd55f7ec29e90d87b801bc7adaa207e8ff66ba7
> 3: https://github.com/FDOS/xcopy/blob/03220f9bec98585802a2c25eac4ac2bd55d2b743/source/xcopy.c#L596

> Bernd


>> On 21.03.2025 at 14:14, Jerome Shidel via Freedos-devel 
>> <freedos-devel@lists.sourceforge.net> wrote:
>> 
>> 
>> Hi Eric,
>> 
>>> On Mar 21, 2025, at 6:03 AM, Eric Auer via Freedos-devel 
>>> <freedos-devel@lists.sourceforge.net> wrote:
>>> 
>>> 
>>> Hi! Just guessing, but...
>>> 
>>> Maybe xcopy recursively copies directories? Have you tried to trigger
>>> the bug using a small set of files in a deep directory tree?
>>> 
>>> Eric
>> 
>> Although there are a lot of files on the drive, the directories aren't 
>> deeply nested. But some directories do contain large numbers of files. 
>> 
>> I could not do the entire drive at once without getting the stack overflow.
>> 
>> 
>> But I was able to do it in pieces, by doing all the directories (except 
>> FreeDOS\) one at a time and then each of the FreeDOS\ subdirectories 
>> independently. The error did not occur.
>> 
>> But copying the same SD card using xcopy on a different machine has not 
>> previously generated the stack overflow.
>> 
>> Today I will try a couple of additional tests: see if it is happening on any 
>> of the other machines, see if it always happens at the same point, see if 
>> adding, moving, or removing files or directories affects it, and log the error. 
>> 
Mit freundlichen Grüßen / with kind regards
Tom Ehlert
+49-15151898538


