On 09 Mar 2015 14:48, Eric Blake wrote:
> On 03/09/2015 01:50 PM, Bob Friesenhahn wrote:
> > On Mon, 9 Mar 2015, Mike Gran wrote:
> >> I don't know if y'all saw this blogpost where a guy pushed
> >> the sed regular expression handling into bash-specific
> >> regular expressions when bash was available. He claims
> >> there's a significant performance improvement because of
> >> reduced forking.
> >>
> >> http://harald.hoyer.xyz/2015/03/05/libtool-getting-rid-of-180000-sed-forks/
> >
> > There is an issue in the libtool bug tracker regarding this.
> >
> > This solution only works with GNU bash. It would be good if volunteers
> > could research to see if there are similar solutions which can work with
> > other common shells (e.g. dash, ksh, zsh).
>
> For context, we're trying to speed up:
>
>   sed_quote_subst='s|\([`"$\\]\)|\\\1|g'
>   _G_unquoted_arg=`printf '%s\n' "$1" |$SED "$sed_quote_subst"`
>
> How about this, which should be completely portable to XSI shells (alas,
> it still uses ${a#b} and ${a%b} at the end, so it is not portable to
> ancient Solaris /bin/sh):
>
>   # func_quote STRING
>   # Escapes all \`"$ in STRING with another \, and stores that in $quoted
>   func_quote () {
>     case $1 in
>       *[\\\`\"\$]*)
>         save_IFS=$IFS pre=.$1.
>         for char in '\' '`' '"' '$'; do
>           post= IFS=$char
>           for part in $pre; do
>             post=${post:+$post\\$char}$part
>           done
>           pre=$post
>         done
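For readers who want to try Eric's loop as posted, here is a self-contained sketch. The quoted message is truncated mid-function, so the tail (restoring IFS, then stripping the sentinel dots into $quoted with ${a#b}/${a%b}, as the message's preamble describes) is my reconstruction, and the set -f/set +f guard against pathname expansion is an addition of mine, not part of the quoted code.

```shell
#!/bin/sh
# func_quote STRING
# Escape every \ ` " $ in STRING with a backslash; result in $quoted.
# Sketch completing the truncated loop from the thread: the tail after
# the outer loop is reconstructed, and set -f/+f is an added safeguard.
func_quote () {
  case $1 in
    *[\\\`\"\$]*)
      save_IFS=$IFS pre=.$1.   # sentinel dots keep edge delimiters alive
      set -f                   # the unquoted $pre below must not glob
      for char in '\' '`' '"' '$'; do
        post= IFS=$char
        for part in $pre; do   # split on $char, rejoin with \$char
          post=${post:+$post\\$char}$part
        done
        pre=$post
      done
      set +f
      IFS=$save_IFS
      quoted=${pre#.} quoted=${quoted%.}   # strip the sentinel dots
      ;;
    *) quoted=$1 ;;
  esac
}

func_quote 'say "hi" for $5 and a \ here'
printf '%s\n' "$quoted"    # prints: say \"hi\" for \$5 and a \\ here
```

The sentinel dots matter because POSIX field splitting drops a trailing empty field, so a string ending in one of the four characters would otherwise lose its final delimiter.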
Should we test the size of the string first? I've written such raw shell string-parsing functions before, and once you hit a certain size (like 1k+, IIRC), forking out to sed is way faster, especially when running in multibyte locales (like UTF-8), which most people are using nowadays. -mike
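Mike's size-threshold idea could be sketched as a hybrid that runs the pure-shell loop for short strings and hands long ones to a single sed fork. The 1000-character cutoff and the name func_quote_hybrid are illustrative assumptions (he recalls "like 1k+", not a measured figure), and note that ${#1} counts characters, so in a UTF-8 locale the byte count sed actually processes may be larger.

```shell
#!/bin/sh
# Hybrid quoting sketch: pure-shell loop for short strings, one sed
# fork for long ones.  Threshold and names are illustrative only.
SED=${SED:-sed}
sed_quote_subst='s|\([`"$\\]\)|\\\1|g'

func_quote_hybrid () {
  if [ "${#1}" -gt 1000 ]; then
    # Long string: one sed fork beats many shell loop iterations.
    quoted=$(printf '%s\n' "$1" | $SED "$sed_quote_subst")
    return
  fi
  case $1 in
    *[\\\`\"\$]*)
      save_IFS=$IFS pre=.$1.   # sentinel dots keep edge delimiters alive
      set -f                   # the unquoted $pre below must not glob
      for char in '\' '`' '"' '$'; do
        post= IFS=$char
        for part in $pre; do   # split on $char, rejoin with \$char
          post=${post:+$post\\$char}$part
        done
        pre=$post
      done
      set +f
      IFS=$save_IFS
      quoted=${pre#.} quoted=${quoted%.}   # strip the sentinel dots
      ;;
    *) quoted=$1 ;;
  esac
}
```

Real code would want to benchmark the crossover point per shell and locale rather than hard-code it.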
_______________________________________________
https://lists.gnu.org/mailman/listinfo/libtool