Also, the Kubuntu system Rakudo was built with --gen-moar; on the Arch
Linux system, MoarVM, NQP, and Rakudo were all built separately with the
same --prefix.
On Wed, Jun 15, 2016 at 12:01 AM, Daniel Green wrote:
> Doesn't happen every time, seems to be about 1 in 5. Here are
Doesn't happen every time; it seems to be about 1 in 5. Here are the
results for two different systems.
This is an up-to-date Arch Linux:
uname -a
Linux 4.5.3-1-ARCH #1 SMP PREEMPT Sat May 7 20:43:57 CEST 2016
x86_64 GNU/Linux
p6 --version
This is Rakudo version 2016.05-145-gac0dcdd built on MoarVM
Also, running it ~30 times without profiling, I never saw a crash.
On Wed, Jun 15, 2016 at 12:04 AM, Daniel Green <ddgr...@gmail.com> wrote:
> Also, the Kubuntu system Rakudo was built with --gen-moar; on the Arch
> Linux system, MoarVM, NQP, and Rakudo were all built separately with the
> same --prefix.
There is only one file to look for: profile-\d+.html in your cwd.
And as a side note: do not profile code that runs that long. 8 minutes
of execution will produce an HTML file (with a JSON blob) of several
hundred megabytes. Your browser won't cope with that.
Try to profile the generation of only a single page.
I tried that, and while it was running my hard disk ran out of space. I am
not sure if it is related, but the process crashed and I could not tell
whether it created anything on the disk. Before trying again, I'd like to
remove anything it might have created. Where should I look for its temporary files?
If you're running Rakudo on MoarVM, try the --profile option. It will create
an HTML file that shows a lot of useful information, including time spent in
each routine, call graphs, GC allocations, etc.
Pm
On Wed, Dec 31, 2014 at 09:35:33AM +0200, Gabor Szabo wrote:
The Perl 6 Maven site is a static site generated by some Perl 6 code.
Currently it takes about 8 minutes to regenerate the 270 pages of the
site, which is quite frustrating.
Is there already a tool I could use to profile my code, to see which part
takes the longest time, so I can focus my optimization efforts there?
No one took the ticket. Hence, I'm closing it.
On Mon Jun 25 02:27:01 2007, ptc wrote:
This was one of my "convert TODO items in the code into RT tickets"
tasks, so it's not actually something I specifically wanted.
It's been two months since the last posting on this ticket. Evidently
no one is convinced enough of its importance to work on it.
On Sun, 24 Jun 2007, James Keenan via RT wrote:
On Tue Feb 13 08:06:53 2007, ptc wrote:
The profiling options used in config/init/defaults.pm are specific to
gcc. This should probably be specified in the relevant hints file.
The profiling options code in config/init/defaults.pm reads:
kid51,
On 25/06/07, James Keenan via RT [EMAIL PROTECTED] wrote:
On Tue Feb 13 08:01:12 2007, ptc wrote:
The profiling options specified in config/init/defaults.pm should be
moved into their own 'step' of the configure process.
Paul: Can you explain your rationale for this?
This was one of my "convert TODO items in the code into RT tickets"
tasks, so it's not actually something I specifically wanted.
On Tue Feb 13 08:06:53 2007, ptc wrote:
The profiling options used in config/init/defaults.pm are specific to
gcc. This should probably be specified in the relevant hints file.
The profiling options code in config/init/defaults.pm reads:
if ( $conf->options->get('profile') ) {
    $conf->data->set(
On Sun, 24 Jun 2007 21:13:14 -0700
James Keenan via RT [EMAIL PROTECTED] wrote:
The profiling options code in config/init/defaults.pm reads:
if ( $conf->options->get('profile') ) {
    $conf->data->set(
        cc_debug => '-pg',
        ld_debug => '-pg',
# New Ticket Created by Paul Cochrane
# Please include the string: [perl #41496]
# in the subject line of all future correspondence about this issue.
# URL: http://rt.perl.org/rt3/Ticket/Display.html?id=41496
The profiling options specified in config/init/defaults.pm should be
moved into their own 'step' of the configure process.
# New Ticket Created by Paul Cochrane
# Please include the string: [perl #41497]
# in the subject line of all future correspondence about this issue.
# URL: http://rt.perl.org/rt3/Ticket/Display.html?id=41497
The profiling options used in config/init/defaults.pm are specific to
gcc. This should probably be specified in the relevant hints file.
- Runtime addition of traces/wrappers will be more important than
adding them at compile-time, considering that they're likely to be
mostly used for layering debugging/profiling/etc. checks over existing
code, and in the debugger. So much more important, that I'd consider
skipping the compile-time syntax.
- I like the Perl 6 way of specifying the point in the wrapping sub
where the wrapped sub should be called better than
On 1/19/07, Allison Randal [EMAIL PROTECTED] wrote:
(That also means you can monkey around with the args before passing them
on to the wrapped sub.)
- It should be possible both to put multiple wrappers/traces on a
subroutine, and to put wrappers around other wrappers. So these are all
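(The stacking behaviour described here is language-agnostic. A minimal
sketch in Python rather than Perl 6 — the names trace/add/calls are made
up for illustration — showing multiple wrappers layered on one routine,
each choosing where to call inward:)

```python
import functools

calls = []  # records the order in which wrappers fire

def trace(label):
    """Return a wrapper that logs entry/exit around the wrapped sub."""
    def wrapper(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            calls.append(f"{label}:enter")
            # This is the point where the wrapped sub is called; the
            # wrapper could also monkey around with args before passing
            # them on, as the thread suggests.
            result = fn(*args, **kwargs)
            calls.append(f"{label}:exit")
            return result
        return inner
    return wrapper

def add(a, b):
    return a + b

# Multiple wrappers on one subroutine: the outer wrapper wraps the inner one.
add = trace("outer")(trace("inner")(add))

result = add(2, 3)  # → 5; calls is outer:enter, inner:enter, inner:exit, outer:exit
```

Removing a layer at runtime is then just rebinding the name, which is
what makes runtime (rather than compile-time) wrapping attractive for
debugging/profiling checks.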
development, as we can run the test suite faster.
(this is mainly tcl's test suite I'm talking about here.)
The problem here is that we don't know exactly where our bottlenecks
are, or where best to concentrate our optimizations. Parrot provides
opcode-level profiling, but a more helpful report for me would
[coke - Sun Aug 15 13:41:43 2004]:
Add profiling build options
(From the TODO file)
Is this really worth doing? Since profiling flags are compiler specific
isn't it better to just let them be set as additional CFLAGS?
-J
--
On Feb 20, 2006, at 23:28, Joshua Hoblitt via RT wrote:
Is this really worth doing? Since profiling flags are compiler
specific
isn't it better to just let them be set as additional CFLAGS?
How do I add to CFLAGS from the perl Configure.pl command?
leo
On Tue, Feb 21, 2006 at 01:02:23AM +0100, Leopold Toetsch wrote:
On Feb 20, 2006, at 23:28, Joshua Hoblitt via RT wrote:
Is this really worth doing? Since profiling flags are compiler
specific
isn't it better to just let them be set as additional CFLAGS?
How do I add to CFLAGS from
# New Ticket Created by Will Coleda
# Please include the string: [perl #31156]
# in the subject line of all future correspondence about this issue.
# URL: http://rt.perl.org:80/rt3/Ticket/Display.html?id=31156
Add profiling build options
(From the TODO file)
# New Ticket Created by Will Coleda
# Please include the string: [perl #31158]
# in the subject line of all future correspondence about this issue.
# URL: http://rt.perl.org:80/rt3/Ticket/Display.html?id=31158
of opcodes performed (with a fixed minimum, though)
- don't calibrate opcode speeds to the faster-than-light
- pretty-print the profiling report (this is of course personal
taste) by always having the same number of columns, dropping
--- lines, and saying Title instead of TITLE.
--
Jarkko Hietaniemi
I've divided the profile counters for DOD into 4. We now have:
$ parrot -p tools/dev/bench_op.imc 'new $P0, .PerlInt'
...
 -4  DOD_collect_PMC      51  0.007713  0.1512
 -3  DOD_collect_buffers  51  0.003491  0.0685
 -5  DOD_mark_next        51  0.003178
Has anyone tried kcachegrind to profile parrot's speed? Based on what the web
page http://www.weidendorfers.de/kcachegrind/ says:
The trace includes the number of instruction/data memory accesses and
1st/2nd level cache misses, and relates it to source lines and functions
of the run
At 6:09 PM +0100 5/19/02, Nicholas Clark wrote:
On Sat, May 18, 2002 at 07:33:53PM -0400, Dan Sugalski wrote:
At 7:25 PM -0400 5/18/02, Melvin Smith wrote:
Yeh I know that word is yucky and from Java land, but in this case,
I think that
system PMCs should take liberties for optimization.
On Sat, May 18, 2002 at 07:33:53PM -0400, Dan Sugalski wrote:
At 7:25 PM -0400 5/18/02, Melvin Smith wrote:
Yeh I know that word is yucky and from Java land, but in this case,
I think that
system PMCs should take liberties for optimization.
*All* PMCs should take liberties for
I decided to do some profiling and tinkering, and I picked the PerlInt class
since it's one of the most common. There is a large gap between our
MOPS benchmarks when using the plain INT registers as opposed to
the PMC regs.
There seems to be much room for optimization in the PMC virtual
methods
At 7:25 PM -0400 5/18/02, Melvin Smith wrote:
Yeh I know that word is yucky and from Java land, but in this case,
I think that
system PMCs should take liberties for optimization.
*All* PMCs should take liberties for optimization. PMC vtable entries
are the only things that should know the
At 07:33 PM 5/18/2002 -0400, Dan Sugalski wrote:
At 7:25 PM -0400 5/18/02, Melvin Smith wrote:
Yeh I know that word is yucky and from Java land, but in this case, I
think that
system PMCs should take liberties for optimization.
*All* PMCs should take liberties for optimization. PMC vtable
At 7:35 PM -0400 5/18/02, Melvin Smith wrote:
At 07:33 PM 5/18/2002 -0400, Dan Sugalski wrote:
At 7:25 PM -0400 5/18/02, Melvin Smith wrote:
Yeh I know that word is yucky and from Java land, but in this
case, I think that
system PMCs should take liberties for optimization.
*All* PMCs should
Also, it's perfectly fine for a coordinated group of PMCs (like, say,
the ones that provide perl's base scalar behavior) to share grubby
internal knowledge, though I'd like to keep that under control, as it's
easy to get out of sync.
Ok, now that I'm looking closer, it appears my
of collection runs only turns into an 11%
improvement in total performance.
This prompted me to set up Parrot for profiling in VTune here on my
machine. What follows is a rather longish email, for which I
apologize. Lots of it is due to statistics, which I'm including to allow
people to draw
Dan Sugalski wrote:
I think perhaps a rewrite of life.pasm into perl with some
benchmarking would be in order before making that judgement.
Following is a rough perl5 version of life.pasm.
On my system [Pentium 166; linux 2.2.18; perl 5.6.1] this takes 96 to 97
seconds; CVS parrot takes 91 to
At 8:09 AM -0400 4/12/02, Michel J Lambert wrote:
Few things immediately come to mind:
a) with the current encoding system, we're guaranteed to be slower than
without it. If we want Parrot to be as fast as Perl5, we're deluding
ourselves.
I think perhaps a rewrite of life.pasm into perl with
This adds timing information to profiling. Yeah, that means we have to
call Parrot_floatval_time() twice for each op. You'll probably need to
iterate many, many times to see any time at all--for example, the
program:
set I0, 1
FOO:
dec I0
if I0, FOO
end
At 01:28 AM 1/13/2002 -0800, Brent Dax wrote:
This adds timing information to profiling.
I'm OK with this patch generally, but I can't get it to apply easily. If it
builds with no warnings because of it, go commit it.
Dan
[EMAIL PROTECTED] writes:
Anyone surprised by the top few entries:
Nope. It looks close to what I saw when I profiled perl 5.004 and 5.005
running over innlog.pl and cleanfeed. The only difference is the method
stuff, since neither of those were OO apps. The current Perl seems to
spend most
This is from a perl5.7.0 (well the current perforce depot) compiled
with -pg and then run on a smallish example of my heavy OO day job app.
The app reads 7300 lines of "verilog" and parses it with (tweaked) Parse-Yapp
into a tree of perl objects, messes with the parse tree, and then calls
a method