The problem with the current string encoding is that it is parsable but not human-readable. It also complicates parsing by requiring two different decoding methods to be implemented.
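As a rough illustration of the alternative, a minimal percent-encoding sketch in Python (the function names `encode_field`/`decode_field` are hypothetical, not part of any audit API) might encode only whitespace separators plus the escape character itself, leaving everything else literal:

```python
import urllib.parse

def encode_field(value: str) -> str:
    # Hypothetical sketch: percent-encode only '%' and whitespace
    # separators. Everything else stays literal, so encoded strings
    # remain human-readable and substring matching on encoded text
    # still works for any substring that contains no separator.
    out = []
    for ch in value:
        if ch.isspace() or ch == '%':
            out.append(''.join('%%%02X' % b for b in ch.encode('utf-8')))
        else:
            out.append(ch)
    return ''.join(out)

def decode_field(value: str) -> str:
    # Standard percent-decoding is a strict superset of what
    # encode_field emits, so any URL-decoding library works here.
    return urllib.parse.unquote(value)
```

Note that '%' itself must be escaped as well, otherwise decoding is ambiguous; that is the one character beyond the separators that the scheme has to reserve.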
It occurs to me that a URL encoding scheme would also meet the parsing requirements. Additionally:

1. It is always human-readable.
2. There is only one encoding scheme.
3. Substring matching on encoded strings will always succeed.

URL encoding is just one way to achieve this, and has the advantage of being widely implemented. However, the minimal requirements would be a scheme which encoded only separator characters (whitespace in this case) without the use of those separators.

I'm sure this has been considered before. Given that it's a road I'm considering heading down, what were the reasons for not doing it?

Thanks,

Matt
-- 
Matthew Booth, RHCA, RHCSS
Red Hat, Global Professional Services
M:       +44 (0)7977 267231
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
--
Linux-audit mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/linux-audit
