Steven D'Aprano wrote:

> Personally, I would never use eval on any string I didn't write myself.
> If I was thinking about evaluating a user-string, I would always write a
> function to parse the string and accept only the specific sort of data I
> expected. In your case, a quick-and-dirty untested function might be:
for a more robust approach, you can use Python's tokenizer module, together
with the iterator-based approach described here:

http://online.effbot.org/2005_11_01_archive.htm#simple-parser-1

here's a (tested!) variant that handles lists and dictionaries as well:

import cStringIO, tokenize

def sequence(next, token, end):
    out = []
    token = next()
    while token[1] != end:
        out.append(atom(next, token))
        token = next()
        if token[1] == "," or token[1] == ":":
            token = next()
    return out

def atom(next, token):
    if token[1] == "(":
        return tuple(sequence(next, token, ")"))
    elif token[1] == "[":
        return sequence(next, token, "]")
    elif token[1] == "{":
        seq = sequence(next, token, "}")
        res = {}
        for i in range(0, len(seq), 2):
            res[seq[i]] = seq[i+1]
        return res
    elif token[0] in (tokenize.STRING, tokenize.NUMBER):
        return eval(token[1])  # safe use of eval!
    raise SyntaxError("malformed expression (%s)" % token[1])

def simple_eval(source):
    src = cStringIO.StringIO(source).readline
    src = tokenize.generate_tokens(src)
    src = (token for token in src if token[0] is not tokenize.NL)
    res = atom(src.next, src.next())
    if src.next()[0] is not tokenize.ENDMARKER:
        raise SyntaxError("bogus data after expression")
    return res

(now waiting for paul to post the obligatory pyparsing example).

</F>

-- 
http://mail.python.org/mailman/listinfo/python-list
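for anyone reading this on Python 3, here's a sketch of the same tokenizer-driven
parser, adapted by hand: cStringIO becomes io.StringIO, the generator's .next()
method becomes the next() builtin, and NEWLINE tokens are filtered out alongside
NL. the names and the extra filtering are my adaptation, not part of the
original post:

```python
# Python 3 adaptation (sketch) of the tokenizer-based simple_eval:
# only individual STRING/NUMBER literals ever reach eval(), so
# arbitrary expressions are rejected rather than executed.
import io
import tokenize

def sequence(next_token, token, end):
    # collect comma/colon-separated atoms until the closing bracket
    out = []
    token = next_token()
    while token[1] != end:
        out.append(atom(next_token, token))
        token = next_token()
        if token[1] in (",", ":"):
            token = next_token()
    return out

def atom(next_token, token):
    if token[1] == "(":
        return tuple(sequence(next_token, token, ")"))
    elif token[1] == "[":
        return sequence(next_token, token, "]")
    elif token[1] == "{":
        # pairs of key, value atoms collected flat, then zipped into a dict
        seq = sequence(next_token, token, "}")
        return {seq[i]: seq[i + 1] for i in range(0, len(seq), 2)}
    elif token[0] in (tokenize.STRING, tokenize.NUMBER):
        return eval(token[1])  # safe: token is a single literal
    raise SyntaxError("malformed expression (%s)" % token[1])

def simple_eval(source):
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    # drop both NL (non-logical) and NEWLINE (logical) tokens
    tokens = (t for t in tokens if t[0] not in (tokenize.NL, tokenize.NEWLINE))
    next_token = lambda: next(tokens)
    result = atom(next_token, next_token())
    if next_token()[0] != tokenize.ENDMARKER:
        raise SyntaxError("bogus data after expression")
    return result

simple_eval("[1, (2, 'three'), {4: 'five'}]")
# -> [1, (2, 'three'), {4: 'five'}]
```

(current Python also ships ast.literal_eval, which accepts the same kind of
literal-only input; the hand-rolled version is still a nice illustration of
the tokenize module.)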