We saw a significant speedup, but I think it may have been a red herring. We were checking overloaded operators in a stupid way: type-check the arguments once to get their coarse types and resolve the overloading, rewrite the expression to an ordinary application node, and then re-typecheck the whole thing.
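To make that concrete, here is a minimal OCaml sketch of that check-twice strategy. Everything in it (the AST, typecheck, resolve_overload, and the primitive names) is a made-up placeholder rather than Boomerang's actual code; it only shows why each overloaded node forces an extra pass over its arguments.

  type typ = TLens | TRegexp

  type exp =
    | EVar  of string
    | EApp  of exp * exp              (* ordinary application *)
    | EOver of string * exp * exp     (* unresolved overloaded operator *)

  (* Toy one-pass typechecker over ordinary expressions.  The EApp rule is a
     stub; the point is only that it walks the whole subtree. *)
  let rec typecheck (env : (string * typ) list) (e : exp) : typ =
    match e with
    | EVar x -> List.assoc x env
    | EApp (f, a) -> let _ = typecheck env a in typecheck env f
    | EOver _ -> failwith "unresolved overloading"

  (* Pick a concrete primitive from the coarse types of the arguments. *)
  let resolve_overload op t1 t2 =
    match op, t1, t2 with
    | "|", TLens, TLens     -> "lens_union"
    | "|", TRegexp, TRegexp -> "regexp_alt"
    | _ -> failwith "no matching instance"

  (* The expensive part: every overloaded node typechecks its (already
     elaborated) arguments once to resolve the operator and rewrites itself
     into a plain application -- and then the caller typechecks the rewritten
     tree all over again. *)
  let rec elaborate env e =
    match e with
    | EOver (op, e1, e2) ->
        let e1' = elaborate env e1 and e2' = elaborate env e2 in
        let t1 = typecheck env e1' and t2 = typecheck env e2' in  (* pass 1 *)
        EApp (EApp (EVar (resolve_overload op t1 t2), e1'), e2')
    | EApp (e1, e2) -> EApp (elaborate env e1, elaborate env e2)
    | EVar _ -> e

  let check env e = typecheck env (elaborate env e)               (* pass 2 *)

  (* Example: (a | b) | c over lens-typed variables. *)
  let () =
    let env = [ ("a", TLens); ("b", TLens); ("c", TLens);
                ("lens_union", TLens) ] in
    let e = EOver ("|", EOver ("|", EVar "a", EVar "b"), EVar "c") in
    assert (check env e = TLens)

Under a scheme like this, each overloaded node re-walks everything beneath it during its resolution pass, which is presumably the blow-up that both the flattening and the smarter typechecking below avoid.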
That strategy, combined with a ~900-ary union in one of our lenses, was slowing things down big time. Flattening the unions helped, but so did doing the type checking in a smarter way (something we did after I wrote that note).

Nate

On Wed, Aug 13, 2008 at 2:26 PM, David Lutterkort <[EMAIL PROTECTED]> wrote:
> On Sat, 2008-07-26 at 12:52 -0400, Nate Foster wrote:
>> A trick we do in Boomerang, which may be useful if you really do need
>> a lens union and can't push it down into a union of regexps, is to
>> parse
>>
>> (l1 | l2 | l3 | l4)
>>
>> as
>>
>> ( ( l1 | l2 ) | ( l3 | l4 ) )
>>
>> instead of
>>
>> ( l1 | ( l2 | ( l3 | l4 ) ) )
>
> How much of a speedup did you see with Boomerang from doing this? I
> just tried that, and the performance improvement is underwhelming ...
> runtimes for the typechecker vary by less than 10% between balanced and
> unbalanced trees for union/concat; unfortunately, the variation goes
> either way, i.e. it speeds the typechecker up in some instances and
> slows it down in others.
>
> David
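For reference, the rebalancing trick discussed in the quoted messages amounts to splitting the operand list in half recursively instead of folding it to the right. A minimal OCaml sketch, assuming a toy lens type with a binary Union constructor (not the real Boomerang or Augeas representation):

  (* Hypothetical lens representation; Union is the binary lens union node. *)
  type lens = Prim of string | Union of lens * lens

  (* Split a list after the first n elements. *)
  let rec split_at n = function
    | [] -> ([], [])
    | l when n <= 0 -> ([], l)
    | x :: xs -> let (a, b) = split_at (n - 1) xs in (x :: a, b)

  (* Right-nested parse: l1 | (l2 | (l3 | ...)); nesting depth n-1. *)
  let union_right (ls : lens list) : lens =
    match List.rev ls with
    | [] -> invalid_arg "union_right: empty union"
    | last :: rest -> List.fold_left (fun acc l -> Union (l, acc)) last rest

  (* Balanced parse: split the operands in half recursively; depth ~ log2 n. *)
  let rec union_balanced (ls : lens list) : lens =
    match ls with
    | [] -> invalid_arg "union_balanced: empty union"
    | [l] -> l
    | _ ->
        let left, right = split_at (List.length ls / 2) ls in
        Union (union_balanced left, union_balanced right)

  (* [l1; l2; l3; l4] becomes ((l1|l2)|(l3|l4)) instead of (l1|(l2|(l3|l4))). *)
  let () =
    let ls = [Prim "l1"; Prim "l2"; Prim "l3"; Prim "l4"] in
    assert (union_balanced ls =
            Union (Union (Prim "l1", Prim "l2"), Union (Prim "l3", Prim "l4")));
    assert (union_right ls =
            Union (Prim "l1", Union (Prim "l2", Union (Prim "l3", Prim "l4"))))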
