On Wed, Nov 8, 2017 at 7:33 AM, Chris Barker <[email protected]> wrote:
> On Tue, Nov 7, 2017 at 6:41 AM, Steven D'Aprano <[email protected]> wrote:
>> In any case, I think that securing literal_eval is much simpler than
>> securing eval:
>>
>> try:
>>     # a thousand character expression ought to be enough for
>>     # any legitimate purpose...
>>     value = literal_eval(tainted_string[:1000])  # untested
>> except MemoryError:
>>     value = None
>
> sure -- though I'd use a lot more than 1000 characters -- not much these
> days, and you might want to unpack something like a JSON data package...
That's the trouble, though. It's perfectly safe to literal_eval a large
amount of well-formed data (say, a dict display with simple keys and
good-sized strings as values), but you can cause major problems by
literal_evalling a relatively small amount of malicious data (eg "["*100
bombs out with MemoryError, and I wouldn't trust that there isn't
something far worse). If you're working with untrusted data, you
probably should be using json.loads rather than ast.literal_eval.

-1 on hiding eval/exec; these features exist in many languages, and
they're identically dangerous everywhere. Basically, use eval only with
text from the owner of the system, not from anyone untrusted.

ChrisA
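
[Editor's note: a minimal sketch of the json.loads route suggested
above; the function name, the size cap, and the exception handling are
illustrative assumptions, not part of the original thread.]

    import json

    def parse_untrusted(text, max_len=100000):
        # Reject oversized payloads outright rather than truncating,
        # since a truncated JSON document would rarely parse anyway.
        # The 100000-character cap is an arbitrary example value.
        if len(text) > max_len:
            return None
        try:
            return json.loads(text)
        except (ValueError, RecursionError):
            # json.JSONDecodeError is a subclass of ValueError;
            # deeply nested input can instead raise RecursionError.
            return None

Because json.loads only parses JSON text and never invokes Python's
compiler, a malicious payload is limited to making the parse fail,
which this sketch turns into a plain None result.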
