Stuart Henderson wrote:
> On 2023-09-13, Eric Wong <[email protected]> wrote:
> > Theo de Raadt wrote:
> >> There isn't a way.  And I will argue there shouldn't be a way to do that.
> >> I don't see a need to invent such a scheme for one user, when half
> >> a century of Unix has no way to do this.
> >> Sorry.
> >
> > I have a different use case than Johannes but looking for a similar feature.
> > Maybe I can convince you :>
> >
> > For background, I develop multi-process daemons and OpenBSD is
> > the only platform I'm noticing segfaults on[1].
> >
> > The lack of PIDs in the core filenames means they can get
> > clobbered in parallel scenarios and I lose useful information.
> >
> > Sometimes, daemons run in / (or another unwritable directory);
> > and the core dump can't get written, at all.
> 
> If the daemons are changing uid, read about kern.nosuidcoredump
> in sysctl(8) (set the sysctl, mkdir /var/crash/progname, and
> it will write to $pid.core).
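(For reference, a sketch of the setup Stuart describes; "progname" is a
placeholder for the actual daemon's binary name, and the value 3 is my
reading of sysctl(8) as the per-program-directory mode -- check the man
page on your release before relying on it:)

```shell
# Have set-id/uid-changing processes dump core under /var/crash
# instead of the working directory (value per sysctl(8); 3 assumed
# here to get the per-program subdirectory).
sysctl kern.nosuidcoredump=3

# Persist the setting across reboots.
echo 'kern.nosuidcoredump=3' >> /etc/sysctl.conf

# The per-program directory must already exist.
mkdir -p /var/crash/progname

# After a crash, look for /var/crash/progname/$pid.core
```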

They aren't, they're all per-user.  I'm seeing core files from a
heavily-parallelized test suite[1].  Some processes can chdir to
/, some stay in their current dir, and some chdir into
short-lived temporary directories.

Thanks.

[1] The good news is the test suite passes; but the lone core dump
    I sometimes get tells me it's in the Perl destructor sequence.
    I've been adding `END {}' blocks and explicit undefs but still
    occasionally see a perl.core file after a run.  And even if
    I don't see that file after a run, I wouldn't know if a core
    dump failed in / or a temporary directory.
