> On Dec 31, 2018, at 11:38, Jeffrey Walton <noloa...@gmail.com> wrote:
>
> On Mon, Dec 31, 2018 at 2:16 PM Vincent Lefevre <vinc...@vinc17.net> wrote:
>>
>> On 2018-12-31 13:03:27 -0500, Jeffrey Walton wrote:
>>
>>> This is the first point of unwanted data egress. Sensitive information
>>> like user passwords and keys can be written to the filesystem
>>> unprotected.
>>
>> This can occur with any program, even one not using asserts, e.g. due to
>> a segmentation fault (which may happen as a consequence of not using
>> asserts, with possibly worse consequences).
>>
>> If you don't want a core file, then you can instruct the kernel not
>> to write a core file. See getrlimit.
>
> To play devil's advocate again, that strategy requires every user to
> have the knowledge. If RTFM was going to work, it should have
> happened in the last 50 years or so.
>
> Refusing to process the data and failing the API call requires no
> knowledge on the user's part.
I don't have a dog in this fight, but you referenced high-integrity software (though I guess what is meant is confidentiality rather than integrity in this case) and then say we cannot rely on people to RTFM. While I don't doubt there are users who will fail to understand the consequences of having core dumps enabled, this is just one of many ways to leak information in a non-hardened system. E.g. you can attach to the victim process with gdb/ptrace and simply read its memory, if the sysadmin has not blocked this with Yama or similar.

Could you elaborate on the threat model you have in mind?

_______________________________________________
gmp-bugs mailing list
gmp-bugs@gmplib.org
https://gmplib.org/mailman/listinfo/gmp-bugs