I do from time to time go back and test long strings. I have a JSON
benchmark that's a favorite. If someone wants to do benchmarking, I'd
consider it a favor. I've considered asking on this list, in fact, but
I thought other priorities were higher. And this time in particular,
with Libmarpa's parse engine in the middle of a rewrite, is a bad time
to do it -- it'd be quite possible that your results would be irrelevant
in a few days. Once the algorithm settles, benchmarking it would be
interesting.
Note that the rewrite, although I expect it to improve speed, is not
speed motivated. It is about offering more capabilities, simplifying
the code, and simplifying the math. I would certainly do it if it were
break even, or even at a small cost in speed.
Perl's overhead is understandable if you look at the services it
provides. "1 + 2" in Perl requires creating two SVs (moderately
complex data structures), allocating memory, etc., with probably well
over a dozen subroutine calls. "1 + 2" in C would be optimized out at
compile time. "1 + x" would be more complicated -- perhaps as many as 2
instructions. This is considerably less time than the amortized garbage
collection overhead of one SV. Given all the extra services it
provides, it's quite an accomplishment that Perl is only 10-to-1, and
that speaks well of how it is programmed.
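A quick way to see that per-operation overhead from Perl itself is
the core Benchmark module. Here is a minimal sketch (the two loop
bodies are arbitrary stand-ins, not part of my benchmark suite):

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my $x = 2;
    cmpthese( -1, {    # run each case for about 1 CPU second
        # addition with a variable cannot be folded;
        # Perl does SV work on every call
        add_var   => sub { my $r = 1 + $x },
        # a constant expression, folded by the Perl compiler
        add_const => sub { my $r = 1 + 2 },
    } );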
C is fast because it expects you to do all the thinking, protects you
from almost nothing and, when things go wrong, expects you to figure out why.
-- jeffrey
On 01/15/2014 02:02 PM, Deyan Ginev wrote:
"Libmarpa takes less than 10% of the time, so a 10% change in its
speed is a 1% change in the overall result, which on Linux I do not
think is measurable."
Well, isn't this a big problem? From my basic understanding, it seems
that libmarpa does all the heavy lifting parsing-wise (for grammars
without Perl semantic actions). So 90% is lost in "cosmetic" overheads
(such as compiling SLIF down to a libmarpa-compatible grammar object,
doing passes back-and-forth through XS and maybe doing the recognizer
traversal).
Would this 1:9 ratio change if a single grammar is provided with a
vast set of parse inputs? E.g., process 10,000 strings in a single
pass, so that the initialization overhead melts away. I could of
course be off in my guess, and the 90% overhead may not be mostly in
initialization but in some other part of the processing that I am
wrongly assuming to be done in libmarpa.
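Concretely, the kind of run I have in mind would look something like
the sketch below -- the toy grammar is just a made-up placeholder,
but the shape (compile once, recognize many) is the point:

    use strict;
    use warnings;
    use Marpa::R2;

    # Compile the SLIF source down to a grammar object once.
    my $dsl = join "\n",
        q{:start ::= sum},
        q{sum ::= number '+' number},
        q{number ~ [\d]+},
        q{:discard ~ [\s]+};
    my $grammar = Marpa::R2::Scanless::G->new( { source => \$dsl } );

    # Reuse the same grammar for many inputs, so only the
    # per-string recognizer cost remains.
    my @inputs = ('1 + 2') x 10_000;
    for my $input (@inputs) {
        my $recce = Marpa::R2::Scanless::R->new( { grammar => $grammar } );
        $recce->read( \$input );
        my $value_ref = $recce->value();
    }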
But the 10%-to-90% statement certainly makes Perl look bad in this
case (and piques my curiosity as to why that is).
Deyan
On Wed, Jan 15, 2014 at 10:45 PM, Ron Savage <[email protected]> wrote:
Well done! This sounds like a big win, and having the nerve to
make significant changes like this is usually the sign of a great
programmer.
It means something else, of course, for those people who just
change things to fabricate the impression that they are 'doing
something'.
--
You received this message because you are subscribed to the Google
Groups "marpa parser" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to [email protected].
For more options, visit https://groups.google.com/groups/opt_out.