On Fri, Jun 22, 2018 at 6:45 PM, Steven D'Aprano <st...@pearwood.info> wrote:
> On Sat, Jun 23, 2018 at 01:33:59PM +1200, Greg Ewing wrote:
>> Chris Angelico wrote:
>> >Downside:
>> >You can't say "I'm done with this string, destroy it immediately".
>>
>> Also it would be hard to be sure there wasn't another
>> copy of the data somewhere from a time before you
>> got around to marking the string as sensitive, e.g.
>> in a file buffer.
>
> Don't let the perfect be the enemy of the good.
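To make the "destroy it immediately" point concrete: an immutable str can never be wiped, but the closest approximation available today is to keep the secret in a mutable bytearray and overwrite it in place when you're done. A sketch (illustrative only, and subject to exactly the caveats discussed below -- it says nothing about other copies the interpreter or OS may have made):

```python
# Keep the secret in a mutable buffer rather than an immutable str,
# so the bytes can be overwritten in place when no longer needed.
secret = bytearray(b"correct horse battery staple")

# ... use the secret ...

# Overwrite the buffer in place with zeros.  This wipes only THIS
# buffer; any copies made earlier (file buffers, GC moves, swap)
# are untouched.
secret[:] = bytes(len(secret))

assert all(b == 0 for b in secret)
```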
That's true, but for security features it's important to have a proper
analysis of the threat and of when the mitigation will and won't work;
otherwise, you don't know whether it's even "good", and you don't know
how to educate people on what they need to do to make effective use of
it (or where it's not worth bothering).

Another issue: I believe it'd be impossible for this proposal to work
correctly on implementations with a compacting GC (e.g., PyPy), because
with a compacting GC strings might get copied around in memory during
their lifetime. And crucially, this might have already happened before
the interpreter was told that a particular string object contained
sensitive data. I'm guessing this is part of why Java and C# use a
separate type.

There's a lot of prior art on this in other languages/environments, and
a lot of experts who've thought hard about it. Python-{ideas,dev}
doesn't have a lot of security experts, so I'd very much want to see
some review of that work before we go running off designing something
ad hoc.

The PyCA cryptography library has some discussion in their docs:

https://cryptography.io/en/latest/limitations/

One possible way to move the discussion forward would be to ask the
pyca devs what kind of API they'd like to see in the interpreter, if
any.

-n

--
Nathaniel J. Smith -- https://vorpus.org
_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/