Slow???

2023-08-17 Thread ToddAndMargo via perl6-users

Hi All

Fedora 38

$ raku -v
Welcome to Rakudo™ v2023.06.
Implementing the Raku® Programming Language v6.d.
Built on MoarVM version 2023.06.


Has the compiler gotten really, really, really
slow lately?

What normally takes 8 seconds is now dragging on
for minutes:

$ raku -c GetUpdates.pl6

Perplexed,
-T


Found the source of the drag, but it is still slow.

 #`(
 Aug 08, 2023
 
 href="https://www.adobe.com/devnet-docs/acrobatetk/tools/ReleaseNotesDC/continuous/dccontinuousaug2023.html\
  #dccontinuousaugtwentytwentythree" 
disablelinktracking="false">\

  DC Aug 2023 (23.003.20269)
 
 Continuous
 
 Latest Release: This update provides new features, 
security mitigations, feature enhancements, and bug fixes.

  }

The `#`(` should have been `#`{`, since the block is closed with `}` and the delimiters never matched.
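For reference, a minimal sketch of how Raku's embedded comments pair their delimiters (the closer must match the opener's bracket type):

   #`( everything up to the matching close paren is a comment )
   #`{ this form may safely contain a ) and ends at the matching brace }
   say "still parses";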



--
~~
Computers are like air conditioners.
They malfunction when you open windows
~~


Re: Why so slow

2019-04-28 Thread ToddAndMargo via perl6-users

On 4/28/19 4:26 PM, ToddAndMargo via perl6-users wrote:

On 4/20/19 8:58 PM, ToddAndMargo via perl6-users wrote:

Hi All,

One liners are fast, but my own programs are very slow to start.

I download

https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6

To check it out, and it also takes ten seconds to start.

What gives?

Many thanks,
-T


Follow up:  Figured it out.  Well, ALMOST.  (Watch the weasel word.)


First I had to fix up zef.  I posted it on another thread, but
I will repost it here too:


On 4/28/19 4:09 PM, ToddAndMargo via perl6-users wrote:
> Well now, too many cooks spoil the stew.


I did a full `dnf remove rakudo rakudo-zef`.

Then I found every "zef" entry (~/.zef, /root/.zef, etc.) on my hard
drive and erased it.

And every perl6 directory (opt, lib64, ~/.perl6, etc.) got erased too.

Then I reinstalled rakudo and rakudo-zef.

Now zef is working right.

Apparently, I had too many different types/versions
of installs out there and everything was getting all
mixed up.
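Roughly the cleanup described above as shell commands (a sketch; the exact paths depend on how the packages were installed):

   sudo dnf remove rakudo rakudo-zef
   rm -rf ~/.zef ~/.perl6                 # per-user zef state and precomp cache
   sudo rm -rf /root/.zef                 # plus root's copy, if any
   sudo dnf install rakudo rakudo-zef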




And lo and behold, "Stage parse" went
from 13 seconds to 6 seconds:

$ perl6 --stagestats GetUpdates.pl6
Stage start  :   0.000
Stage parse  :   5.727
Stage syntaxcheck:   0.000
Stage ast    :   0.000
Stage optimize   :   0.369
Stage mast   :   1.210
Stage mbc    :   0.021
Stage moar   :   0.000
GetUpdates.pl6
Mozilla Mirror 
Debug is OFF


Still about 5 seconds too slow, but a vast improvement.

Thanks to everyone for all the tips and help!

-T



[18:18]  ToddAndMargo: stage parse is where the
compiler parses your source code into an AST (abstract syntax
tree) before turning the AST into bytecode. it uses rakudo's
grammar engine, which isn't very optimized yet


--
~~
Computers are like air conditioners.
They malfunction when you open windows
~~


Re: Why so slow

2019-04-28 Thread ToddAndMargo via perl6-users

On 4/20/19 8:58 PM, ToddAndMargo via perl6-users wrote:

Hi All,

One liners are fast, but my own programs are very slow to start.

I download

https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6

To check it out, and it also takes ten seconds to start.

What gives?

Many thanks,
-T


Follow up:  Figured it out.  Well, ALMOST.  (Watch the weasel word.)


First I had to fix up zef.  I posted it on another thread, but
I will repost it here too:


On 4/28/19 4:09 PM, ToddAndMargo via perl6-users wrote:
> Well now, too many cooks spoil the stew.


I did a full `dnf remove rakudo rakudo-zef`.

Then I found every "zef" entry (~/.zef, /root/.zef, etc.) on my hard
drive and erased it.

And every perl6 directory (opt, lib64, ~/.perl6, etc.) got erased too.

Then I reinstalled rakudo and rakudo-zef.

Now zef is working right.

Apparently, I had too many different types/versions
of installs out there and everything was getting all
mixed up.




And lo and behold, "Stage parse" went
from 13 seconds to 6 seconds:

$ perl6 --stagestats GetUpdates.pl6
Stage start  :   0.000
Stage parse  :   5.727
Stage syntaxcheck:   0.000
Stage ast    :   0.000
Stage optimize   :   0.369
Stage mast   :   1.210
Stage mbc    :   0.021
Stage moar   :   0.000
GetUpdates.pl6
Mozilla Mirror 
Debug is OFF


Still about 5 seconds too slow, but a vast improvement.

Thanks to everyone for all the tips and help!

-T


Re: Why so slow

2019-04-28 Thread Joseph Brenner
Brad Gilbert  wrote:
> For one it has the following line:
>
> use lib 'lib';
>
> That is going to slow everything down if you have anything in the
> `lib` directory.
> The more things in that directory, the slower it will get.


I've been seeing some pretty slow perl6 one-line invocations,
where something like this might take 10 secs:

   perl6 -e'say "yellow"'

This evidently has to do with this perl alias I use that loads a module:

   alias perl6='perl6 -Mmethod-menu'
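A quick way to confirm whether the alias is the culprit is to bypass it for a single run (plain shell behaviour, nothing Raku-specific):

   \perl6 -e'say "yellow"'          # leading backslash skips the alias
   command perl6 -e'say "yellow"'   # same idea via the shell builtin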

I do have a number of locations in my $PERL6LIB, and while they
could certainly be trimmed down, the sheer number of them doesn't
seem excessive to me (certainly not compared to my PERL5LIB):


  echo $PERL6LIB | tr ',' '\n'
  /home/doom/End/Cave/IntrospectP6/Wall/Object-Examine/lib/
  /home/doom/End/Cave/IntrospectP6/Wall/Augment-Util/lib
  /home/doom/End/Cave/IntrospectP6/Wall/Symbol-Scan/lib
  /home/doom/End/Cave/IntrospectP6/Wall/method-menu/lib
  /home/doom/End/Cave/IntrospectP6/Wall/perl6-object-examine/lib/
  /home/doom/End/Cave/IntrospectP6/Wall/perl6-augment-util/lib
  /home/doom/End/Cave/IntrospectP6/Wall/perl6-symbol-scan/lib
  /home/doom/End/Cave/IntrospectP6/Wall/perl6-method-menu/lib
  /home/doom/End/Cave/Perl6/lib
  /home/doom/End/Cave/Eye/lib/perl6
  /home/doom/lib/perl6
  /home/doom/End/Sys/Perl6/perl6-Perl6-Tidy-master/lib
  /home/doom/End/Sys/Perl6/perl6-Perl6-Parser-master/lib
  /home/doom/End/Sys/Perl6/p6-JSON-Pretty-master/lib


Cutting those locations down to just three (most of them were
empty, anyway):

  echo $PERL6LIB | tr ',' '\n'
  /home/doom/End/Cave/Perl6/lib
  /home/doom/End/Cave/Eye/lib/perl6
  /home/doom/lib/perl6

Gets you a roughly 6x speed-up:

  time perl6 -e'say "Sy..."'
  Sy...

  real  0m1.685s
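The same comparison can also be made without touching any shell startup files, by overriding PERL6LIB for a single run (paths taken from the trimmed list above):

  ( export PERL6LIB=/home/doom/End/Cave/Perl6/lib,/home/doom/lib/perl6
    time perl6 -e'say "Sy..."' )       # subshell keeps the override temporary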


Re: Why so slow

2019-04-28 Thread Joseph Brenner
ToddAndMargo via perl6-users  wrote:
>  David Christensen wrote:
>> We discussed this at our San Francisco Perl Mongers meeting today:
>
> Any Perl 5 guys there?  And did they get "grouchy" with you
> for using Perl 6?

We've been doing an "Informal Perl6 Study Group" over at the Oakland
Museum cafe every weekend.  The idea is actually to talk about perl6
stuff, though pretty often we actually veer off into perl5, R, etc.
And I don't usually get grumpy about it myself,  though I'm a pretty
grumpy character by nature.


Re: Why so slow

2019-04-28 Thread ToddAndMargo via perl6-users

On 4/28/19 2:21 AM, Timo Paulssen wrote:
the strace command ended up only tracing the shell script "perl6", which 
very quickly execs moar, at which point strace considers its job done. 
there was barely any output at all for that reason.


fortunately we can just add -f to the strace command so that it follows 
processes as they are spawned.


does /home/linuxutil have many files and folders in it?


There are 275 files in it.  Basically all my perl 5, perl 6
and bash scripts, along with a ton of supporting files (tmp, etc.).
I will vpaste an ls if you need it.



was the output from RAKUDO_MODULE_DEBUG going smoothly, or were there 
any points where it did any very long pauses?


Could not tell.  I sent the output to a file so I could vpaste it.

it does look a little like some of your individual modules have their 
own "use lib" commands in them, but i'm not exactly sure how it 
influences precompilation and such.


They do.

Is there another way to get Perl to look for modules in other directories?


I am out of business for the time being until I get that zef error fixed.

Thank you for all the help with this!

-T


Re: Why so slow

2019-04-28 Thread Timo Paulssen
the strace command ended up only tracing the shell script "perl6", which 
very quickly execs moar, at which point strace considers its job done. 
there was barely any output at all for that reason.


fortunately we can just add -f to the strace command so that it follows 
processes as they are spawned.
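Putting both suggestions together (program name as used elsewhere in the thread; the log file name is just an example):

    strace -f -e trace=stat perl6 GetUpdates.pl6 2> strace.log
    grep -c ENOENT strace.log    # rough count of failed file lookups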


does /home/linuxutil have many files and folders in it?

was the output from RAKUDO_MODULE_DEBUG going smoothly, or were there 
any points where it did any very long pauses?


it does look a little like some of your individual modules have their 
own "use lib" commands in them, but i'm not exactly sure how it 
influences precompilation and such.


On 28/04/2019 09:21, ToddAndMargo via perl6-users wrote:




Hi Timo,

This tell you anything?

$ perl6 --stagestats GetUpdates.pl6
Stage start  :   0.000
Stage parse  :  13.150
Stage syntaxcheck:   0.000
Stage ast    :   0.000
Stage optimize   :   0.351
Stage mast   :   1.133
Stage mbc    :   0.019
Stage moar   :   0.000

GetUpdates.pl6  <-- my program starts here
Mozilla Mirror 
Debug is OFF


The "Stage parse : 13.150" is eating me alive!

-T


On 4/28/19 12:01 AM, Timo Paulssen wrote:
> Please give this a try:
>
>  env RAKUDO_MODULE_DEBUG=1 perl6 GetUpdates.pl6
>
> and tell me if any of the lines it spits out takes considerable amounts
> of time before the next one shows up.
>
> Then, you can also
>
>  strace -e stat perl6 GetUpdates.pl6
>
> to see if it's going through a whole load of files.
>
> without the "-e stat" you will get a whole lot more output, but it'll
> tell you pretty much everything that it's asking the kernel to do.
> Pasting that whole file could be a privacy concern, especially if it
> iterates your entire home directory, which is still my working 
> hypothesis.

>
> HTH
>    - Timo
>

My home directory is pretty small, except for .wine.
All the good stuff is on my network shares to share
with my numerous VMs.


$ env RAKUDO_MODULE_DEBUG=1 perl6 GetUpdates.pl6 > GetUpdates.debug 2>&1
http://vpaste.net/xmwcd


$ strace -e stat perl6 GetUpdates.pl6 > GetUpdates.debug 2>&1
http://vpaste.net/8ekeI


Re: Why so slow

2019-04-28 Thread ToddAndMargo via perl6-users

On 4/27/19 10:40 PM, David Christensen wrote:

We discussed this at our San Francisco Perl Mongers meeting today:



Any Perl 5 guys there?  And did they get "grouchy" with you
for using Perl 6?  Did they call Perl 6 "Java" by any chance?

Chuckle.


Re: Why so slow

2019-04-28 Thread David Christensen

On 4/27/19 10:40 PM, David Christensen wrote:

I suggested that he install the official package:

https://rakudo.org/files



The Rakudo site is degraded:

"Currently, rakudo.org is being served from a back-up server that 
doesn't have the download files."



I had previously downloaded installers for Debian and for macOS:

2019-04-28 00:14:14 dpchrist@tinkywinky ~/samba-dpchrist/Downloads/p/perl6
$ ls -1hs rakudo-*
 11M rakudo-pkg-Debian9_2019.03.1-01_amd64.deb
6.0K rakudo-pkg-Debian9_2019.03.1-01_amd64.deb.sha1
 25M rakudo-star-2019.03.dmg


If anyone needs them, send me an e-mail off list.


David


Re: Why so slow

2019-04-28 Thread David Christensen

On 4/28/19 12:07 AM, Timo Paulssen wrote:

I'm writing a program called moarperf, which is a local web app written in Cro 
that doesn't touch the network outside of loopback. It just has to build its 
JavaScript blobs once by downloading like a brazillion libraries from npm.


That should be useful.



Also, Comma Complete comes with support for profiling, which also doesn't need 
a live net connection.


https://commaide.com/



Finally, I think at least Debian patches the profiler html app to point at an 
angularjs downloaded from Debian repositories. It's quite feasible to have an 
env var for nqp/rakudo that changes the path to the js libraries to something 
local.


I use Mozilla Firefox with the NoScript plug-in -- that's how I saw that 
JavaScript is required.  The profile pages seem to work on Debian at the 
present time if I do nothing (JavaScript is cached?), but require 
JavaScript to be explicitly enabled on macOS.



David


Re: Why so slow

2019-04-28 Thread ToddAndMargo via perl6-users

On 4/28/19 12:01 AM, Timo Paulssen wrote:

especially if it iterates your entire home directory


Don't think so


$ rm -rf ~/.perl6/precomp

$ perl6 --stagestats GetUpdates.pl6
Stage start  :   0.000
Stage parse  :  13.201
Stage syntaxcheck:   0.000
Stage ast    :   0.000
Stage optimize   :   0.362
Stage mast   :   1.099
Stage mbc    :   0.029
Stage moar   :   0.000
GetUpdates.pl6
Mozilla Mirror 
Debug is OFF

Does this have anything to do with it?

17: use lib '/home/linuxutil/p6lib';

'use lib' may not be pre-compiled
at /home/linuxutil/p6lib/CurlUtils.pm6 (CurlUtils):17
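One common workaround is to drop the `use lib` line from the module itself and point the compiler at that directory from the outside instead, e.g. (path taken from the error above):

    perl6 -I/home/linuxutil/p6lib GetUpdates.pl6

or, once per shell session:

    export PERL6LIB=/home/linuxutil/p6lib
    perl6 GetUpdates.pl6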


Re: Why so slow

2019-04-28 Thread ToddAndMargo via perl6-users





Hi Timo,

This tell you anything?

$ perl6 --stagestats GetUpdates.pl6
Stage start  :   0.000
Stage parse  :  13.150
Stage syntaxcheck:   0.000
Stage ast    :   0.000
Stage optimize   :   0.351
Stage mast   :   1.133
Stage mbc    :   0.019
Stage moar   :   0.000

GetUpdates.pl6  <-- my program starts here
Mozilla Mirror 
Debug is OFF


The "Stage parse : 13.150" is eating me alive!

-T


On 4/28/19 12:01 AM, Timo Paulssen wrote:
> Please give this a try:
>
>  env RAKUDO_MODULE_DEBUG=1 perl6 GetUpdates.pl6
>
> and tell me if any of the lines it spits out takes considerable amounts
> of time before the next one shows up.
>
> Then, you can also
>
>  strace -e stat perl6 GetUpdates.pl6
>
> to see if it's going through a whole load of files.
>
> without the "-e stat" you will get a whole lot more output, but it'll
> tell you pretty much everything that it's asking the kernel to do.
> Pasting that whole file could be a privacy concern, especially if it
> iterates your entire home directory, which is still my working 
> hypothesis.

>
> HTH
>- Timo
>

My home directory is pretty small, except for .wine.
All the good stuff is on my network shares to share
with my numerous VMs.


$ env RAKUDO_MODULE_DEBUG=1 perl6 GetUpdates.pl6 >   GetUpdates.debug 2>&1
http://vpaste.net/xmwcd


$ strace -e stat perl6 GetUpdates.pl6 > GetUpdates.debug 2>&1
http://vpaste.net/8ekeI


Re: Why so slow

2019-04-28 Thread Timo Paulssen
I'm writing a program called moarperf, which is a local web app written in Cro 
that doesn't touch the network outside of loopback. It just has to build its 
JavaScript blobs once by downloading like a brazillion libraries from npm.

Also, Comma Complete comes with support for profiling, which also doesn't need 
a live net connection.

Finally, I think at least Debian patches the profiler html app to point at an 
angularjs downloaded from Debian repositories. It's quite feasible to have an 
env var for nqp/rakudo that changes the path to the js libraries to something 
local.

On 28 April 2019 09:01:28 CEST, David Christensen  
wrote:
>> On 21/04/2019 05:58, ToddAndMargo via perl6-users wrote:
>>> Hi All,
>>> 
>>> One liners are fast, but my own programs are very slow to start.
>>> 
>>> I download
>>> 
>>> https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6
>>>
>>>
>>> To check it out, and it also takes ten seconds to start.
>>> What gives?
>>> 
>>> Many thanks, -T
>
>
>On 4/27/19 11:14 PM, Timo Paulssen wrote:
>> You don't happen to have a PERL6LIB or -I pointed at a folder with
>> loads of stuff in it? If that is the case, having a single "use"
>> statement will cause rakudo to iterate through all files and
>> subfolders, which can take a long time if you've got, for example,
>> your home directory in that list (be it via -I. or PERL6LIB=. or
>> explicitly mentioning the big folder)
>> 
>> There's many different tools to find out what's going on. For the
>> "too big perl6lib folder" problem, "strace" will give you a good hint
>> by giving one "stat" command for every file under your home
>> directory.
>> 
>> Other than that, check "perl6 --stagestats -e 'blah'" or "perl6 
>> --stagestats FooBar.p6", which will give timings for the different 
>> phases, most notably "parse", "optimize", and "mbc". loading modules
>> and precompiling are part of the parse stage.
>> 
>> the "time" command will split your run time between "system" and
>> "user" time (as well as wallclock time). if "system" is particularly
>> high, then the program spends a lot of time asking the kernel to do
>> stuff (such as iterating files on the filesystem for the PERL6LIB
>> case I've mentioned above).
>> 
>> If none of that helps, startup can be profiled with "perl6 
>> --profile-compile blah" and run time can be profiled with "perl6 
>> --profile blah". The default output will be a html file that you can
>>  just open in your browser to get an interactive performance
>> inspection tool thingie. Be aware, though, that it can become very
>> big in the case of --profile-compile, depending on the structure of
>> the program being compiled.
>> 
>> Hope any of that helps - Timo
>
>
>Yes, very useful -- thank you.  :-)
>
>
>(I do not seem to have a man page for Perl 6, but 'perl6 --help' gives a
>brief overview of those options.)
>
>
>The only drawback is that the HTML profile reports require JavaScript
>and a live Internet connection to function.
>
>
>David

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: Why so slow

2019-04-28 Thread Timo Paulssen

Please give this a try:

    env RAKUDO_MODULE_DEBUG=1 perl6 GetUpdates.pl6

and tell me if any of the lines it spits out takes considerable amounts 
of time before the next one shows up.


Then, you can also

    strace -e stat perl6 GetUpdates.pl6

to see if it's going through a whole load of files.

without the "-e stat" you will get a whole lot more output, but it'll 
tell you pretty much everything that it's asking the kernel to do. 
Pasting that whole file could be a privacy concern, especially if it 
iterates your entire home directory, which is still my working hypothesis.


HTH
  - Timo

On 28/04/2019 08:41, ToddAndMargo via perl6-users wrote:

> On 21/04/2019 05:58, ToddAndMargo via perl6-users wrote:
>> Hi All,
>>
>> One liners are fast, but my own programs are very slow to start.
>>
>> I download
>>
>> https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6
>>
>> To check it out and it also takes ten second to start.
>>
>> What gives?
>>
>> Many thanks,
>> -T

On 4/27/19 11:14 PM, Timo Paulssen wrote:
You don't happen to have a PERL6LIB or -I pointed at a folder with 
loads of stuff in it? If that is the case, having a single "use" 
statement will cause rakudo to iterate through all files and 
subfolders, which can take a long time if you've got, for example, 
your home directory in that list (be it via -I. or PERL6LIB=. or 
explicitly mentioning the big folder)


There's many different tools to find out what's going on. For the 
"too big perl6lib folder" problem, "strace" will give you a good hint 
by giving one "stat" command for every file under your home directory.


Other than that, check "perl6 --stagestats -e 'blah'" or "perl6 
--stagestats FooBar.p6", which will give timings for the different 
phases, most notably "parse", "optimize", and "mbc". loading modules 
and precompiling are part of the parse stage.


the "time" command will split your run time between "system" and 
"user" time (as well as wallclock time). if "system" is particularly 
high, then the program spends a lot of time asking the kernel to do 
stuff (such as iterating files on the filesystem for the PERL6LIB 
case I've mentioned above).


If none of that helps, startup can be profiled with "perl6 
--profile-compile blah" and run time can be profiled with "perl6 
--profile blah". The default output will be a html file that you can 
just open in your browser to get an interactive performance 
inspection tool thingie. Be aware, though, that it can become very 
big in the case of --profile-compile, depending on the structure of 
the program being compiled.


Hope any of that helps
   - Timo



Hi Timo,

This tell you anything?

$ perl6 --stagestats GetUpdates.pl6
Stage start  :   0.000
Stage parse  :  13.150
Stage syntaxcheck:   0.000
Stage ast    :   0.000
Stage optimize   :   0.351
Stage mast   :   1.133
Stage mbc    :   0.019
Stage moar   :   0.000

GetUpdates.pl6  <-- my program starts here
Mozilla Mirror 
Debug is OFF


The "Stage parse : 13.150" is eating me alive!

-T


Re: Why so slow

2019-04-28 Thread David Christensen

On 21/04/2019 05:58, ToddAndMargo via perl6-users wrote:

Hi All,

One liners are fast, but my own programs are very slow to start.

I download

https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6


To check it out, and it also takes ten seconds to start.
What gives?

Many thanks, -T



On 4/27/19 11:14 PM, Timo Paulssen wrote:

You don't happen to have a PERL6LIB or -I pointed at a folder with
loads of stuff in it? If that is the case, having a single "use"
statement will cause rakudo to iterate through all files and
subfolders, which can take a long time if you've got, for example,
your home directory in that list (be it via -I. or PERL6LIB=. or
explicitly mentioning the big folder)

There's many different tools to find out what's going on. For the
"too big perl6lib folder" problem, "strace" will give you a good hint
by giving one "stat" command for every file under your home
directory.

Other than that, check "perl6 --stagestats -e 'blah'" or "perl6 
--stagestats FooBar.p6", which will give timings for the different 
phases, most notably "parse", "optimize", and "mbc". loading modules

and precompiling are part of the parse stage.

the "time" command will split your run time between "system" and
"user" time (as well as wallclock time). if "system" is particularly
high, then the program spends a lot of time asking the kernel to do
stuff (such as iterating files on the filesystem for the PERL6LIB
case i've menitoned above).

If none of that helps, startup can be profiled with "perl6 
--profile-compile blah" and run time can be profiled with "perl6 
--profile blah". The default output will be a html file that you can

 just open in your browser to get an interactive performance
inspection tool thingie. Be aware, though, that it can become very
big in the case of --profile-compile, depending on the structure of
the program being compiled.

Hope any of that helps - Timo



Yes, very useful -- thank you.  :-)


(I do not seem to have a man page for Perl 6, but 'perl6 --help' gives a 
brief overview of those options.)



The only drawback is that the HTML profile reports require JavaScript
and a live Internet connection to function.


David


Re: Why so slow

2019-04-28 Thread ToddAndMargo via perl6-users

> On 21/04/2019 05:58, ToddAndMargo via perl6-users wrote:
>> Hi All,
>>
>> One liners are fast, but my own programs are very slow to start.
>>
>> I download
>>
>> https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6
>>
>> To check it out and it also takes ten second to start.
>>
>> What gives?
>>
>> Many thanks,
>> -T

On 4/27/19 11:14 PM, Timo Paulssen wrote:
You don't happen to have a PERL6LIB or -I pointed at a folder with loads 
of stuff in it? If that is the case, having a single "use" statement 
will cause rakudo to iterate through all files and subfolders, which can 
take a long time if you've got, for example, your home directory in that 
list (be it via -I. or PERL6LIB=. or explicitly mentioning the big folder)


There's many different tools to find out what's going on. For the "too 
big perl6lib folder" problem, "strace" will give you a good hint by 
giving one "stat" command for every file under your home directory.


Other than that, check "perl6 --stagestats -e 'blah'" or "perl6 
--stagestats FooBar.p6", which will give timings for the different 
phases, most notably "parse", "optimize", and "mbc". loading modules and 
precompiling are part of the parse stage.


the "time" command will split your run time between "system" and "user" 
time (as well as wallclock time). if "system" is particularly high, then 
the program spends a lot of time asking the kernel to do stuff (such as 
iterating files on the filesystem for the PERL6LIB case I've mentioned 
above).


If none of that helps, startup can be profiled with "perl6 
--profile-compile blah" and run time can be profiled with "perl6 
--profile blah". The default output will be a html file that you can 
just open in your browser to get an interactive performance inspection 
tool thingie. Be aware, though, that it can become very big in the case 
of --profile-compile, depending on the structure of the program being 
compiled.


Hope any of that helps
   - Timo



Hi Timo,

This tell you anything?

$ perl6 --stagestats GetUpdates.pl6
Stage start  :   0.000
Stage parse  :  13.150
Stage syntaxcheck:   0.000
Stage ast    :   0.000
Stage optimize   :   0.351
Stage mast   :   1.133
Stage mbc    :   0.019
Stage moar   :   0.000

GetUpdates.pl6  <-- my program starts here
Mozilla Mirror 
Debug is OFF


The "Stage parse : 13.150" is eating me alive!

-T


Re: Why so slow

2019-04-28 Thread ToddAndMargo via perl6-users

On 4/27/19 10:40 PM, David Christensen wrote:
What is your operating system?  


Fedora 29 x64
Xfce 4.13
$ uname -r
5.0.7-200.fc29.x86_64



What is your Perl 6?


$ rpm -qa rakudo
rakudo-0.2019.03-1.fc29.x86_64

Also tried:
https://github.com/nxadm/rakudo-pkg/releases
 rakudo-pkg-Fedora29-2018.11-01.x86_64.rpm
 rakudo-pkg-Fedora29-2019.03.1-01.x86_64.rpm
No symptom change


Re: Why so slow

2019-04-28 Thread Timo Paulssen
You don't happen to have a PERL6LIB or -I pointed at a folder with loads 
of stuff in it? If that is the case, having a single "use" statement 
will cause rakudo to iterate through all files and subfolders, which can 
take a long time if you've got, for example, your home directory in that 
list (be it via -I. or PERL6LIB=. or explicitly mentioning the big folder)


There's many different tools to find out what's going on. For the "too 
big perl6lib folder" problem, "strace" will give you a good hint by 
giving one "stat" command for every file under your home directory.


Other than that, check "perl6 --stagestats -e 'blah'" or "perl6 
--stagestats FooBar.p6", which will give timings for the different 
phases, most notably "parse", "optimize", and "mbc". loading modules and 
precompiling are part of the parse stage.


the "time" command will split your run time between "system" and "user" 
time (as well as wallclock time). if "system" is particularly high, then 
the program spends a lot of time asking the kernel to do stuff (such as 
iterating files on the filesystem for the PERL6LIB case I've mentioned 
above).


If none of that helps, startup can be profiled with "perl6 
--profile-compile blah" and run time can be profiled with "perl6 
--profile blah". The default output will be a html file that you can 
just open in your browser to get an interactive performance inspection 
tool thingie. Be aware, though, that it can become very big in the case 
of --profile-compile, depending on the structure of the program being 
compiled.


Hope any of that helps
  - Timo
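A compact recap of those invocations, using the file name from this thread (timings will of course differ per machine):

    perl6 --stagestats GetUpdates.pl6        # per-stage compile timings
    time perl6 GetUpdates.pl6                # wallclock vs. user vs. system time
    perl6 --profile-compile GetUpdates.pl6   # HTML profile of compilation
    perl6 --profile GetUpdates.pl6           # HTML profile of the run itself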

On 21/04/2019 05:58, ToddAndMargo via perl6-users wrote:

Hi All,

One liners are fast, but my own programs are very slow to start.

I download

https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6

To check it out, and it also takes ten seconds to start.

What gives?

Many thanks,
-T


Re: Why so slow

2019-04-27 Thread David Christensen

On 4/20/19 8:58 PM, ToddAndMargo via perl6-users wrote:

Hi All,

One liners are fast, but my own programs are very slow to start.

I download

https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6

To check it out, and it also takes ten seconds to start.

What gives?

Many thanks,
-T



We discussed this at our San Francisco Perl Mongers meeting today:

1.  Works for me:

2019-04-27 22:32:13 dpchrist@tinkywinky ~
$ cat /etc/debian_version
9.8

2019-04-27 22:32:18 dpchrist@tinkywinky ~
$ uname -a
Linux tinkywinky 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) 
x86_64 GNU/Linux


2019-04-27 22:32:21 dpchrist@tinkywinky ~
$ ls Downloads/rakudo-pkg-Debian9_2019.03.1-01_amd64.deb
Downloads/rakudo-pkg-Debian9_2019.03.1-01_amd64.deb

2019-04-27 22:32:23 dpchrist@tinkywinky ~
$ dpkg-query --list rakudo*
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name       Version      Architecture Description
+++-==========-============-============-=====================
ii  rakudo-pkg 2019.03.1-01 amd64        Rakudo Perl 6 runtime

2019-04-27 22:32:30 dpchrist@tinkywinky ~
$ cat sandbox/perl6/hello.p6
#!/usr/bin/env perl6
print "hello, world!\n";

2019-04-27 22:32:37 dpchrist@tinkywinky ~
$ time sandbox/perl6/hello.p6
hello, world!

real0m0.276s
user0m0.332s
sys 0m0.068s

2.  Worked for another person with macOS.

3.  10~15 second start-up delays for another person with Debian 9 and 
Perl 6 built from source.  top(1) showed 100+ %CPU during the start-up 
delays.  Once inside the REPL, there were no obvious delays.  I 
suggested that he install the official package:


https://rakudo.org/files


What is your operating system?  What is your Perl 6?


Is there a way to run Perl 6 with debugging, logging, profiling, etc., 
enabled, so that the user can see what Perl 6 is doing and how long it 
takes?



David


Re: Why so slow

2019-04-27 Thread Brad Gilbert
From
https://brrt-to-the-future.blogspot.com/2018/07/perl-6-on-moarvm-has-had-jit-for-few.html

> PS: You might read this and be reasonably surprised that Rakudo Perl 6 is 
> not, after all this, very fast yet. I have a - not entirely serious - 
> explanation for that:

> 1. All problems in computer science can be solved with a layer of indirection.
> 2. Many layers of indirection make programs slow.
> 3. Perl 6 solves many computer science problems for you ;-)

>. In the future, we'll continue to solve those problems, just faster.

Basically, even minor things like adding two numbers together, involve
half a dozen or more layers of indirection.
(The optimizer eliminates some of the needless layers of indirection.)

On Sat, Apr 27, 2019 at 6:53 PM ToddAndMargo via perl6-users
 wrote:
>
> On 4/20/19 8:58 PM, ToddAndMargo via perl6-users wrote:
> > Hi All,
> >
> > One liners are fast, but my own programs are very slow to start.
> >
> > I download
> >
> > https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6
> >
> > To check it out and it also takes ten second to start.
> >
> > What gives?
> >
> > Many thanks,
> > -T
>
>
> For comparison purposes, I do have similar sized Perl 5 programs
> with similar amounts of modules, that start almost instantly.


Re: Why so slow

2019-04-27 Thread ToddAndMargo via perl6-users

On 4/20/19 8:58 PM, ToddAndMargo via perl6-users wrote:

Hi All,

One liners are fast, but my own programs are very slow to start.

I download

https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6

To check it out, and it also takes ten seconds to start.

What gives?

Many thanks,
-T



For comparison purposes, I do have similar-sized Perl 5 programs
with similar numbers of modules that start almost instantly.


Re: Why so slow

2019-04-24 Thread ToddAndMargo via perl6-users





Hi All,

One liners are fast, but my own programs are very slow to start.

I download

https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6

To check it out, and it also takes ten seconds to start.

What gives?

Many thanks,
-T



On 4/24/19 5:13 AM, Brad Gilbert wrote:
> For one it has the following line:
>
>  use lib 'lib';
>
> That is going to slow everything down if you have anything in the
> `lib` directory.
> The more things in that directory, the slower it will get.
>
> You should really install the modules with `zef`. (It can download and
> install the modules itself.)
>
> Also it isn't until the very last line of that program that anything
> visible happens.
>
> Basically it has a lot of work to do before it can make itself visible.
>
> On Wed, Apr 24, 2019 at 2:46 AM ToddAndMargo via perl6-users
>  wrote:

Hi Brad,

I just used that as an example that could be downloaded, rather than
posting my own code.  Commenting out the "use lib" made no
difference.  It takes ten seconds for my code to start.

Also, I do not understand what you mean by "zef".  If I don't
use zef to install the modules called out, my code won't run.

-T


Re: Why so slow

2019-04-24 Thread Brad Gilbert
For one it has the following line:

use lib 'lib';

That is going to slow everything down if you have anything in the
`lib` directory.
The more things in that directory, the slower it will get.

You should really install the modules with `zef`. (It can download and
install the modules itself.)

Also it isn't until the very last line of that program that anything
visible happens.

Basically it has a lot of work to do before it can make itself visible.
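In other words, the intended workflow looks roughly like this (module names taken from the example script):

    zef install GTK::Simple     # installs the distribution once, precompiled
    # after which the script needs only:
    #   use GTK::Simple;
    #   use GTK::Simple::App;
    # and no `use lib 'lib';` line at all.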

On Wed, Apr 24, 2019 at 2:46 AM ToddAndMargo via perl6-users
 wrote:
>
> Hi All,
>
> One liners are fast, but my own programs are very slow to start.
>
> I download
>
> https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6
>
> To check it out and it also takes ten second to start.
>
> What gives?
>
> Many thanks,
> -T


Why so slow

2019-04-24 Thread ToddAndMargo via perl6-users

Hi All,

One liners are fast, but my own programs are very slow to start.

I download

https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6

To check it out, and it also takes ten seconds to start.

What gives?

Many thanks,
-T


Re: Why so slow?

2019-04-23 Thread ToddAndMargo via perl6-users

On 4/21/19 6:32 PM, ToddAndMargo via perl6-users wrote:

Hi All,

One liners are fast, but my own programs are very slow to start.

I download

https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6

To check it out, and it also takes ten seconds to start.

What gives?

Many thanks,
-T


I just upgraded from

rakudo-pkg-Fedora29-2018.11-01.x86_64.rpm

to

rakudo-pkg-Fedora29-2019.03.1-01.x86_64.rpm

Still slow.


Re: Why so slow?

2019-04-23 Thread ToddAndMargo via perl6-users

On 4/22/19 4:21 AM, ToddAndMargo via perl6-users wrote:



On 22/04/2019 12:51, ToddAndMargo via perl6-users wrote:

On 22/04/2019 03:32, ToddAndMargo via perl6-users wrote:

Hi All,

One liners are fast, but my own programs are very slow to start.

I download

https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6

To check it out, and it also takes ten seconds to start.

What gives?

Many thanks,
-T


On 4/22/19 3:35 AM, Timo Paulssen wrote:

It's quite possible that when you start that program, you're first
waiting for GTK::Simple to be precompiled; the "use lib 'lib'" can
interfere with the storage of precompilation results. If you have
installed GTK::Simple (for example by going to its folder and running
"zef install .") and removed the line "use lib 'lib';", it might 
start a

lot faster when you run it the second time.

Hope that helps!
    - Timo





Not seeing a `use lib`.  :'(

use v6;
use lib 'lib';
use GTK::Simple;
use GTK::Simple::App;


On 4/22/19 4:02 AM, Timo Paulssen wrote:
 > It's the second line:
 >
 > use v6;
 > use lib 'lib';  # ← this line right here
 > use GTK::Simple;
 > use GTK::Simple::App;
 >
 >    -Timo
 >

Made no difference.  :'(



Anyone?



--
~~
Computers are like air conditioners.
They malfunction when you open windows
~~


Re: Why so slow?

2019-04-22 Thread ToddAndMargo via perl6-users




On 22/04/2019 12:51, ToddAndMargo via perl6-users wrote:

On 22/04/2019 03:32, ToddAndMargo via perl6-users wrote:

Hi All,

One liners are fast, but my own programs are very slow to start.

I download

https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6

To check it out, and it also takes ten seconds to start.

What gives?

Many thanks,
-T


On 4/22/19 3:35 AM, Timo Paulssen wrote:

It's quite possible that when you start that program, you're first
waiting for GTK::Simple to be precompiled; the "use lib 'lib'" can
interfere with the storage of precompilation results. If you have
installed GTK::Simple (for example by going to its folder and running
"zef install .") and removed the line "use lib 'lib';", it might start a
lot faster when you run it the second time.

Hope that helps!
    - Timo





Not seeing a `use lib`.  :'(

use v6;
use lib 'lib';
use GTK::Simple;
use GTK::Simple::App;


On 4/22/19 4:02 AM, Timo Paulssen wrote:
> It's the second line:
>
> use v6;
> use lib 'lib';  # ← this line right here
> use GTK::Simple;
> use GTK::Simple::App;
>
>-Timo
>

Made no difference.  :'(


Re: Why so slow?

2019-04-22 Thread Timo Paulssen
It's the second line:

use v6;
use lib 'lib';  # ← this line right here
use GTK::Simple;
use GTK::Simple::App;

  -Timo

On 22/04/2019 12:51, ToddAndMargo via perl6-users wrote:
>>> On 22/04/2019 03:32, ToddAndMargo via perl6-users wrote:
>>>> Hi All,
>>>>
>>>> One liners are fast, but my own programs are very slow to start.
>>>>
>>>> I download
>>>>
>>>> https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6
>>>>
>>>> To check it out and it also takes ten second to start.
>>>>
>>>> What gives?
>>>>
>>>> Many thanks,
>>>> -T
>
> On 4/22/19 3:35 AM, Timo Paulssen wrote:
>> It's quite possible that when you start that program, you're first
>> waiting for GTK::Simple to be precompiled; the "use lib 'lib'" can
>> interfere with the storage of precompilation results. If you have
>> installed GTK::Simple (for example by going to its folder and running
>> "zef install .") and removed the line "use lib 'lib';", it might start a
>> lot faster when you run it the second time.
>>
>> Hope that helps!
>>    - Timo
>>
>
>
>
> Not seeing a `use lib`.  :'(
>
> use v6;
> use lib 'lib';
> use GTK::Simple;
> use GTK::Simple::App;


Re: Why so slow?

2019-04-22 Thread ToddAndMargo via perl6-users

On 22/04/2019 03:32, ToddAndMargo via perl6-users wrote:

Hi All,

One liners are fast, but my own programs are very slow to start.

I download

https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6

To check it out, and it also takes ten seconds to start.

What gives?

Many thanks,
-T


On 4/22/19 3:35 AM, Timo Paulssen wrote:

It's quite possible that when you start that program, you're first
waiting for GTK::Simple to be precompiled; the "use lib 'lib'" can
interfere with the storage of precompilation results. If you have
installed GTK::Simple (for example by going to its folder and running
"zef install .") and removed the line "use lib 'lib';", it might start a
lot faster when you run it the second time.

Hope that helps!
   - Timo





Not seeing a `use lib`.  :'(

use v6;
use lib 'lib';
use GTK::Simple;
use GTK::Simple::App;


Re: Why so slow?

2019-04-22 Thread Timo Paulssen
It's quite possible that when you start that program, you're first
waiting for GTK::Simple to be precompiled; the "use lib 'lib'" can
interfere with the storage of precompilation results. If you have
installed GTK::Simple (for example by going to its folder and running
"zef install .") and removed the line "use lib 'lib';", it might start a
lot faster when you run it the second time.

Hope that helps!
  - Timo

On 22/04/2019 03:32, ToddAndMargo via perl6-users wrote:
> Hi All,
>
> One liners are fast, but my own programs are very slow to start.
>
> I download
>
> https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6
>
> To check it out and it also takes ten second to start.
>
> What gives?
>
> Many thanks,
> -T


Why so slow?

2019-04-21 Thread ToddAndMargo via perl6-users

Hi All,

One liners are fast, but my own programs are very slow to start.

I download

https://github.com/perl6/gtk-simple/blob/master/examples/05-bars.pl6

To check it out, and it also takes ten seconds to start.

What gives?

Many thanks,
-T


[perl #127064] [PERF] Variable interpolation in regex very slow

2018-05-13 Thread Daniel Green via RT
On Tue, 07 Nov 2017 17:14:15 -0800, ddgr...@gmail.com wrote:
> On Tue, 07 Nov 2017 17:10:29 -0800, ddgr...@gmail.com wrote:
> > On Sun, 15 Oct 2017 05:19:54 -0700, ddgr...@gmail.com wrote:
> > > On Thu, 31 Dec 2015 05:39:24 -0800, ju...@jules.uk wrote:
> > > >
> > > >
> > > > On 29/12/2015 23:05, Timo Paulssen via RT wrote:
> > > > > On 12/29/2015 12:46 AM, Jules Field (via RT) wrote:
> > > > >> # New Ticket Created by  Jules Field
> > > > >> # Please include the string:  [perl #127064]
> > > > >> # in the subject line of all future correspondence about this
> > > > >> issue.
> > > > >> # https://rt.perl.org/Ticket/Display.html?id=127064 >
> > > > >>
> > > > >>
> > > > >> Given
> > > > >> my @lines = "some-text.txt".IO.lines;
> > > > >> my $s = 'Jules';
> > > > >> (some-text.txt is about 43k lines)
> > > > >>
> > > > >> Doing
> > > > >> my @matching = @lines.grep(/ $s /);
> > > > >> is about 50 times slower than
> > > > >> my @matching = @lines.grep(/ Jules /);
> > > > >>
> > > > >> And if $s happened to contain anything other than literals, so
> > > > >> I
> > > > >> had
> > > > >> to us
> > > > >> my @matching = @lines.grep(/ <$s> /);
> > > > >> then it's nearly 150 times slower.
> > > > >>
> > > > >> my @matching = @lines.grep($s);
> > > > >> doesn't appear to work. It matches 0 lines but doesn't die.
> > > > >>
> > > > >> The lack of Perl5's straightforward variable interpolation in
> > > > >> regexs
> > > > >> is crippling the speed.
> > > > >> Is there a faster alternative? (other than EVAL to build the
> > > > >> regex)
> > > > >>
> > > > > For now, you can use @lines.grep(*.contains($s)), which will be
> > > > > sufficiently fast.
> > > > >
> > > > > Ideally, our regex optimizer would turn this simple regex into
> > > > > a
> > > > > code
> > > > > that uses .index to find a literal string and construct a match
> > > > > object
> > > > > for that. Or even - if you put a literal "so" in front - turn
> > > > > it
> > > > > into
> > > > > .contains($literal) if it knows that the match object will only
> > > > > be
> > > > > inspected for true/false.
> > > > >
> > > > > Until then, we ought to be able to make interpolation a bit
> > > > > faster.
> > > > >- Timo
> > > > Many thanks for that. I hadn't thought to use Whatever.
> > > >
> > > > I would ideally also be doing case-insensitive regexps, but they
> > > > are
> > > > 50
> > > > times slower than case-sensitive ones, even in trivial cases.
> > > > Maybe a :adverb for rx// that says "give me static (i.e. Perl5-
> > > > style)
> > > > interpolation in this regex"?
> > > > I can see the advantage of passing the variables to the regex
> > > > engine,
> > > > as
> > > > then they can change over time.
> > > >
> > > > But that's not something I want to do very often, far more
> > > > frequently
> > > > I
> > > > just need to construct the regex at run-time and have it go as
> > > > fast
> > > > as
> > > > possible.
> > > >
> > > > Just thoughts from a big Perl5 user (e.g. MailScanner is 50k
> > > > lines
> > > > of
> > > > it!).
> > > >
> > > > Jules
> > >
> > >
> > > I recently attempted to make interpolating into regexes a little
> > > faster. This is what I was using for a benchmark:
> > > perl6 -e 'my @l = "sm.sql".IO.lines; my $s = "Perl6"; my $t = now;
> > > my
> > > @m = @l.grep(/ $s /); say @m.elems; say now - $t'
> > > sm.sql is 10k lines, of which 1283 contain the text "Perl6".
> > >
> > > This is Rakudo version 2017.09 built on MoarVM version 2017.09.1:
> > > / $s / took 5.3s and / <$s> / took 16.5s.
> > >
> > > This is Rakudo version 2017.09-427-gd23a9ba9d built on MoarVM
> > > version
> > > 2017.09.1-595-g716f2277f:
> > > / $s / took 3.2s and / <$s> / took 14.5s.
> > >
> > > However, if you type the string to interpolate it is *much* faster
> > > for
> > > literal interpolation.
> > > perl6 -e 'my @l = "sm.sql".IO.lines; my Str $s = "Perl6"; my $t =
> > > now;
> > > my @m = @l.grep(/ $s /); say @m.elems; say now - $t'
> > > This takes only 0.33s.
> > >
> > > This is still nowhere near as fast as grep(*.contains($s)) though,
> > > which only takes 0.037s.
> >
> >
> > This is Rakudo version 2017.10-143-g0e50993f4 built on MoarVM version
> > 2017.10-58-gad8618468:
> > / $s / took 2.7s and / <$s> / took 7.0s.
> 
> 
> Adding :i (case insensitive adverb), /:i $s / took 3.0s and /:i <$s> /
> took 7.7s.


This is Rakudo version 2018.04.1-76-g9b915f09d built on MoarVM version 
2018.04.1-98-g1aa02fe45
implementing Perl 6.c.
/ $s / took 1.8s and / <$s> / took 2.6s


[perl #127881] [PERF] slow array slicing

2017-12-16 Thread Aleks-Daniel Jakimenko-Aleksejev via RT
FWIW this issue was noticed today:
https://irclog.perlgeek.de/perl6/2017-12-16#i_15587006

On 2016-04-11 20:40:43, ddgr...@gmail.com wrote:
> '@array[0, 3, 7]' is much slower than '(@array[0], @array[3], @array[7])'
>
>
> time perl6 -e 'my @a = ^500;my @f;my $s = @a.elems;loop (my $i1 = 0; $i1 <
> $s-1; $i1++) { loop (my $i2 = $i1+1; $i2 < $s; $i2++) { @f.push(@a[$i1,
> $i2]) } };'
>
> real 0m14.974s
> user 0m14.947s
> sys 0m0.017s
>
>
> time perl6 -e 'my @a = ^500;my @f;my $s = @a.elems;loop (my $i1 = 0; $i1 <
> $s-1; $i1++) { loop (my $i2 = $i1+1; $i2 < $s; $i2++) { @f.push((@a[$i1],
> @a[$i2])) } };'
>
> real 0m0.897s
> user 0m0.877s
> sys 0m0.017s
>
>
> With Rakudo version 2016.03 built on MoarVM version 2016.03
>
> Dan
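For reference, the two forms being compared, in isolation (a sketch only; the timings above were measured inside the nested loops):

    my @a = ^500;
    my @slice = @a[0, 3, 7];               # list slice (the slow case in the report)
    my @items = (@a[0], @a[3], @a[7]);     # individual lookups (the fast case)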


[perl #127064] [PERF] Variable interpolation in regex very slow

2017-11-07 Thread Daniel Green via RT
On Tue, 07 Nov 2017 17:10:29 -0800, ddgr...@gmail.com wrote:
> On Sun, 15 Oct 2017 05:19:54 -0700, ddgr...@gmail.com wrote:
> > On Thu, 31 Dec 2015 05:39:24 -0800, ju...@jules.uk wrote:
> > >
> > >
> > > On 29/12/2015 23:05, Timo Paulssen via RT wrote:
> > > > On 12/29/2015 12:46 AM, Jules Field (via RT) wrote:
> > > >> # New Ticket Created by  Jules Field
> > > >> # Please include the string:  [perl #127064]
> > > >> # in the subject line of all future correspondence about this
> > > >> issue.
> > > >> # https://rt.perl.org/Ticket/Display.html?id=127064 >
> > > >>
> > > >>
> > > >> Given
> > > >> my @lines = "some-text.txt".IO.lines;
> > > >> my $s = 'Jules';
> > > >> (some-text.txt is about 43k lines)
> > > >>
> > > >> Doing
> > > >> my @matching = @lines.grep(/ $s /);
> > > >> is about 50 times slower than
> > > >> my @matching = @lines.grep(/ Jules /);
> > > >>
> > > >> And if $s happened to contain anything other than literals, so I
> > > >> had
> > > >> to us
> > > >> my @matching = @lines.grep(/ <$s> /);
> > > >> then it's nearly 150 times slower.
> > > >>
> > > >> my @matching = @lines.grep($s);
> > > >> doesn't appear to work. It matches 0 lines but doesn't die.
> > > >>
> > > >> The lack of Perl5's straightforward variable interpolation in
> > > >> regexs
> > > >> is crippling the speed.
> > > >> Is there a faster alternative? (other than EVAL to build the
> > > >> regex)
> > > >>
> > > > For now, you can use @lines.grep(*.contains($s)), which will be
> > > > sufficiently fast.
> > > >
> > > > Ideally, our regex optimizer would turn this simple regex into a
> > > > code
> > > > that uses .index to find a literal string and construct a match
> > > > object
> > > > for that. Or even - if you put a literal "so" in front - turn it
> > > > into
> > > > .contains($literal) if it knows that the match object will only
> > > > be
> > > > inspected for true/false.
> > > >
> > > > Until then, we ought to be able to make interpolation a bit
> > > > faster.
> > > >- Timo
> > > Many thanks for that. I hadn't thought to use Whatever.
> > >
> > > I would ideally also be doing case-insensitive regexps, but they
> > > are
> > > 50
> > > times slower than case-sensitive ones, even in trivial cases.
> > > Maybe a :adverb for rx// that says "give me static (i.e. Perl5-
> > > style)
> > > interpolation in this regex"?
> > > I can see the advantage of passing the variables to the regex
> > > engine,
> > > as
> > > then they can change over time.
> > >
> > > But that's not something I want to do very often, far more
> > > frequently
> > > I
> > > just need to construct the regex at run-time and have it go as fast
> > > as
> > > possible.
> > >
> > > Just thoughts from a big Perl5 user (e.g. MailScanner is 50k lines
> > > of
> > > it!).
> > >
> > > Jules
> >
> >
> > I recently attempted to make interpolating into regexes a little
> > faster. This is what I was using for a benchmark:
> > perl6 -e 'my @l = "sm.sql".IO.lines; my $s = "Perl6"; my $t = now; my
> > @m = @l.grep(/ $s /); say @m.elems; say now - $t'
> > sm.sql is 10k lines, of which 1283 contain the text "Perl6".
> >
> > This is Rakudo version 2017.09 built on MoarVM version 2017.09.1:
> > / $s / took 5.3s and / <$s> / took 16.5s.
> >
> > This is Rakudo version 2017.09-427-gd23a9ba9d built on MoarVM version
> > 2017.09.1-595-g716f2277f:
> > / $s / took 3.2s and / <$s> / took 14.5s.
> >
> > However, if you type the string to interpolate it is *much* faster
> > for
> > literal interpolation.
> > perl6 -e 'my @l = "sm.sql".IO.lines; my Str $s = "Perl6"; my $t =
> > now;
> > my @m = @l.grep(/ $s /); say @m.elems; say now - $t'
> > This takes only 0.33s.
> >
> > This is still nowhere near as fast as grep(*.contains($s)) though,
> > which only takes 0.037s.
> 
> 
> This is Rakudo version 2017.10-143-g0e50993f4 built on MoarVM version
> 2017.10-58-gad8618468:
> / $s / took 2.7s and / <$s> / took 7.0s.


Adding :i (case insensitive adverb), /:i $s / took 3.0s and /:i <$s> / took 
7.7s.


[perl #127064] [PERF] Variable interpolation in regex very slow

2017-11-07 Thread Daniel Green via RT
On Sun, 15 Oct 2017 05:19:54 -0700, ddgr...@gmail.com wrote:
> On Thu, 31 Dec 2015 05:39:24 -0800, ju...@jules.uk wrote:
> >
> >
> > On 29/12/2015 23:05, Timo Paulssen via RT wrote:
> > > On 12/29/2015 12:46 AM, Jules Field (via RT) wrote:
> > >> # New Ticket Created by  Jules Field
> > >> # Please include the string:  [perl #127064]
> > >> # in the subject line of all future correspondence about this
> > >> issue.
> > >> # https://rt.perl.org/Ticket/Display.html?id=127064 >
> > >>
> > >>
> > >> Given
> > >> my @lines = "some-text.txt".IO.lines;
> > >> my $s = 'Jules';
> > >> (some-text.txt is about 43k lines)
> > >>
> > >> Doing
> > >> my @matching = @lines.grep(/ $s /);
> > >> is about 50 times slower than
> > >> my @matching = @lines.grep(/ Jules /);
> > >>
> > >> And if $s happened to contain anything other than literals, so I
> > >> had
> > >> to us
> > >> my @matching = @lines.grep(/ <$s> /);
> > >> then it's nearly 150 times slower.
> > >>
> > >> my @matching = @lines.grep($s);
> > >> doesn't appear to work. It matches 0 lines but doesn't die.
> > >>
> > >> The lack of Perl5's straightforward variable interpolation in
> > >> regexs
> > >> is crippling the speed.
> > >> Is there a faster alternative? (other than EVAL to build the
> > >> regex)
> > >>
> > > For now, you can use @lines.grep(*.contains($s)), which will be
> > > sufficiently fast.
> > >
> > > Ideally, our regex optimizer would turn this simple regex into a
> > > code
> > > that uses .index to find a literal string and construct a match
> > > object
> > > for that. Or even - if you put a literal "so" in front - turn it
> > > into
> > > .contains($literal) if it knows that the match object will only be
> > > inspected for true/false.
> > >
> > > Until then, we ought to be able to make interpolation a bit faster.
> > >- Timo
> > Many thanks for that. I hadn't thought to use Whatever.
> >
> > I would ideally also be doing case-insensitive regexps, but they are
> > 50
> > times slower than case-sensitive ones, even in trivial cases.
> > Maybe a :adverb for rx// that says "give me static (i.e. Perl5-style)
> > interpolation in this regex"?
> > I can see the advantage of passing the variables to the regex engine,
> > as
> > then they can change over time.
> >
> > But that's not something I want to do very often, far more frequently
> > I
> > just need to construct the regex at run-time and have it go as fast
> > as
> > possible.
> >
> > Just thoughts from a big Perl5 user (e.g. MailScanner is 50k lines of
> > it!).
> >
> > Jules
> 
> 
> I recently attempted to make interpolating into regexes a little
> faster. This is what I was using for a benchmark:
> perl6 -e 'my @l = "sm.sql".IO.lines; my $s = "Perl6"; my $t = now; my
> @m = @l.grep(/ $s /); say @m.elems; say now - $t'
> sm.sql is 10k lines, of which 1283 contain the text "Perl6".
> 
> This is Rakudo version 2017.09 built on MoarVM version 2017.09.1:
> / $s / took 5.3s and / <$s> / took 16.5s.
> 
> This is Rakudo version 2017.09-427-gd23a9ba9d built on MoarVM version
> 2017.09.1-595-g716f2277f:
> / $s / took 3.2s and / <$s> / took 14.5s.
> 
> However, if you type the string to interpolate it is *much* faster for
> literal interpolation.
> perl6 -e 'my @l = "sm.sql".IO.lines; my Str $s = "Perl6"; my $t = now;
> my @m = @l.grep(/ $s /); say @m.elems; say now - $t'
> This takes only 0.33s.
> 
> This is still nowhere near as fast as grep(*.contains($s)) though,
> which only takes 0.037s.


This is Rakudo version 2017.10-143-g0e50993f4 built on MoarVM version 
2017.10-58-gad8618468:
/ $s / took 2.7s and / <$s> / took 7.0s.


[perl #127064] [PERF] Variable interpolation in regex very slow

2017-10-15 Thread Daniel Green via RT
On Thu, 31 Dec 2015 05:39:24 -0800, ju...@jules.uk wrote:
> 
> 
> On 29/12/2015 23:05, Timo Paulssen via RT wrote:
> > On 12/29/2015 12:46 AM, Jules Field (via RT) wrote:
> >> # New Ticket Created by  Jules Field
> >> # Please include the string:  [perl #127064]
> >> # in the subject line of all future correspondence about this issue.
> >> # https://rt.perl.org/Ticket/Display.html?id=127064 >
> >>
> >>
> >> Given
> >> my @lines = "some-text.txt".IO.lines;
> >> my $s = 'Jules';
> >> (some-text.txt is about 43k lines)
> >>
> >> Doing
> >> my @matching = @lines.grep(/ $s /);
> >> is about 50 times slower than
> >> my @matching = @lines.grep(/ Jules /);
> >>
> >> And if $s happened to contain anything other than literals, so I had
> >> to us
> >> my @matching = @lines.grep(/ <$s> /);
> >> then it's nearly 150 times slower.
> >>
> >> my @matching = @lines.grep($s);
> >> doesn't appear to work. It matches 0 lines but doesn't die.
> >>
> >> The lack of Perl5's straightforward variable interpolation in regexs
> >> is crippling the speed.
> >> Is there a faster alternative? (other than EVAL to build the regex)
> >>
> > For now, you can use @lines.grep(*.contains($s)), which will be
> > sufficiently fast.
> >
> > Ideally, our regex optimizer would turn this simple regex into a code
> > that uses .index to find a literal string and construct a match
> > object
> > for that. Or even - if you put a literal "so" in front - turn it into
> > .contains($literal) if it knows that the match object will only be
> > inspected for true/false.
> >
> > Until then, we ought to be able to make interpolation a bit faster.
> >- Timo
> Many thanks for that. I hadn't thought to use Whatever.
> 
> I would ideally also be doing case-insensitive regexps, but they are
> 50
> times slower than case-sensitive ones, even in trivial cases.
> Maybe a :adverb for rx// that says "give me static (i.e. Perl5-style)
> interpolation in this regex"?
> I can see the advantage of passing the variables to the regex engine,
> as
> then they can change over time.
> 
> But that's not something I want to do very often, far more frequently
> I
> just need to construct the regex at run-time and have it go as fast as
> possible.
> 
> Just thoughts from a big Perl5 user (e.g. MailScanner is 50k lines of
> it!).
> 
> Jules


I recently attempted to make interpolating into regexes a little faster. This 
is what I was using for a benchmark:
perl6 -e 'my @l = "sm.sql".IO.lines; my $s = "Perl6"; my $t = now; my @m = 
@l.grep(/ $s /); say @m.elems; say now - $t'
sm.sql is 10k lines, of which 1283 contain the text "Perl6".

This is Rakudo version 2017.09 built on MoarVM version 2017.09.1:
/ $s / took 5.3s and / <$s> / took 16.5s.

This is Rakudo version 2017.09-427-gd23a9ba9d built on MoarVM version 
2017.09.1-595-g716f2277f:
/ $s / took 3.2s and / <$s> / took 14.5s.

However, if you type the string to interpolate it is *much* faster for literal 
interpolation.
perl6 -e 'my @l = "sm.sql".IO.lines; my Str $s = "Perl6"; my $t = now; my @m = 
@l.grep(/ $s /); say @m.elems; say now - $t'
This takes only 0.33s.

This is still nowhere near as fast as grep(*.contains($s)) though, which only 
takes 0.037s.
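Side by side, the approaches being benchmarked (file and search string as in the examples above; only the grep line differs):

    my @l = "sm.sql".IO.lines;
    my Str $s = "Perl6";                   # typing the variable helps literal interpolation
    my @m1 = @l.grep(/ $s /);              # interpolated as a literal
    my @m2 = @l.grep(/ <$s> /);            # compiled as a regex fragment (slowest)
    my @m3 = @l.grep(*.contains($s));      # plain substring search (fastest)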


Re: [perl #127330] [SLOW] 10_000 lines with 「say ‘a’;」 take 16 seconds to run

2017-10-11 Thread zof...@zoffix.com via RT

Quoting Parrot Raiser <1parr...@gmail.com>:

> Round trips to the OS, like repeated "says", have been slow relative
> to internal processes since the dawn of time.

In this case, there are no trips. It's the *parsing* time that's slow.  
In fact, if you change it to `$ = ‘a’;` it's equally slow.

Looking at this now, with an extra year of knowledge, it makes perfect  
sense for this to be unusually slower: each line switches back and  
forth from Main language braid, to Quote language braid, and then back  
to Main.


> For a core developer it's kinda hard to forget about parsing being  
> slow anyway :)

Indeed, we all pay a ~70s of wait time to parse core code after  
*every* change.


I'm actually debugging a bug that requires me to learn how the braid  
switcher works; I'll take a look if the switch can be improved—don't  
hold your breath tho, I'm a n00b.


[perl #127330] [SLOW] 10_000 lines with 「say ‘a’;」 take 16 seconds to run

2017-10-11 Thread Zoffix Znet via RT
On Wed, 11 Oct 2017 10:29:41 -0700, zof...@zoffix.com wrote:
> Looking at this now, with an extra year of knowledge, it makes perfect  
> sense for this to be unusually slower: each line switches back and  
> forth from Main language braid, to Quote language braid, and then back  
> to Main.

I was wrong. The braid switch makes virtually no difference and slowness is 
present even with 1 repeated `1;`



Re: [perl #127330] [SLOW] 10_000 lines with 「say ‘a’;」 take 16 seconds to run

2017-10-11 Thread Aleks-Daniel Jakimenko-Aleksejev
Your comment makes no sense because almost all of that time is spent on the
compilation. Have you tried --stagestats like suggested above?

On Wed, Oct 11, 2017 at 7:43 PM, Parrot Raiser <1parr...@gmail.com> wrote:

> Round trips to the OS, like repeated "says", have been slow relative
> to internal processes since the dawn of time.
>


Re: [perl #127330] [SLOW] 10_000 lines with 「say ‘a’;」 take 16 seconds to run

2017-10-11 Thread Parrot Raiser
Round trips to the OS, like repeated "says", have been slow relative
to internal processes since the dawn of time.


[perl #127330] [SLOW] 10_000 lines with 「say ‘a’;」 take 16 seconds to run

2017-10-10 Thread Aleks-Daniel Jakimenko-Aleksejev via RT
FWIW it went from 4.1724 to 3.3393 (2015.12→HEAD(a72214c)) for 5k of says.
On 2016-01-24 06:28:38, n...@detonation.org wrote:
> I don't think it's related to subs. It's compilation in general that's
> rather slow, most of all parsing.
>
> nine@sphinx:~> for x in {1..1}; do echo 'say ‘a’;' >> print.p6;
> done
> nine@sphinx:~> time perl6 --stagestats print.p6 > /dev/null
> Stage start : 0.000
> Stage parse : 5.442
> Stage syntaxcheck: 0.000
> Stage ast : 0.000
> Stage optimize : 0.503
> Stage mast : 1.035
> Stage mbc : 0.102
> Stage moar : 0.000
>
> real 0m7.232s
> user 0m7.152s
> sys 0m0.076s
>
> nine@sphinx:~> for x in {1..1}; do echo "my \$a$x = ‘a’;" >>
> str.p6; done
> nine@sphinx:~> time perl6 --stagestats str.p6 > /dev/null
> Stage start : 0.000
> Stage parse : 8.204
> Stage syntaxcheck: 0.000
> Stage ast : 0.000
> Stage optimize : 0.351
> Stage mast : 0.885
> Stage mbc : 0.014
> Stage moar : 0.000
>
> real 0m9.543s
> user 0m9.480s
> sys 0m0.060s
>
> I'm closing this ticket, as compiling these examples will most
> probably be sped up by general improvements to the compiler and
> optimization infrastructure and not something that's specifically
> targeted at the examples. For a core developer it's kinda hard to
> forget about parsing being slow anyway :)


[perl #125344] [PERF] Int..Whatever ranges are slow (~20 times slower than Int..Int)

2017-10-08 Thread Zoffix Znet via RT
On Sat, 07 Oct 2017 22:19:46 -0700, alex.jakime...@gmail.com wrote:
> why can't it just pretend that * is @array.end?

Done now in https://github.com/rakudo/rakudo/commit/456358e3c3


[perl #125344] [PERF] Int..Whatever ranges are slow (~20 times slower than Int..Int)

2017-10-07 Thread Aleks-Daniel Jakimenko-Aleksejev via RT
So-o. Zoffix insists that everything is correct here, and perhaps it is so.
That being said, I don't quite understand why it can't be done. Maybe somebody
else can take a look also. Here's my logic:

So if you have

@array[0...@array.end]

and

@array[0..*]

Would we get identical results from both of these snippets? I guess we should.
But if it is so, why can't it just pretend that * is @array.end? If I
understand correctly, the range object is right there for inspection, so
everything should be possible.

Zoffix++ pointed out ( https://irclog.perlgeek.de/perl6/2017-10-08#i_15273200 )
that 0..* will stop at the first hole, but there's a ticket for that:
https://rt.perl.org/Ticket/Display.html?id=127573

So assuming that the bug is eventually resolved and both @array[0...@array.end]
and @array[0..*] start giving exactly the same result in all cases, why can't
it be done? And if these two snippets give different results, then what are
these differences? This would be some good doc material.

If there's a chance that it can be done, let's keep this open. The difference
is dramatic and [0..*] is very common in normal code.

On 2017-10-07 21:44:40, c...@zoffix.com wrote:
> On Sat, 06 Jun 2015 20:24:32 -0700, r...@hoelz.ro wrote:
> > Let's say I have an array where @array.end == $end. Using
> > @array[0..$end] is about 20 times faster than @array[0..*]. I have
> > attached an example script that demonstrates this.
>
>
> Thanks for the report, however there's no issue with the compiler
> here.
>
> You're comparing different behaviours: 0..* is a lazy Iterable, so
> extra checks
> are performed to stop iteration of indices once an end is reached. If
> you make 0..$end lazy,
> so the same checks are performed, the perf difference actually gives
> 0..* a small edge (likely due
> to optimized Range.iterator)
>
>  m: my @array = 1..1000; my $end = @array.end; for ^500 { $
> = @array[0..*] }; say now - INIT now
>  rakudo-moar 39d50a: OUTPUT: «3.050471␤»
>  m: my @array = 1..1000; my $end = @array.end; for ^500 { $
> = @array[lazy 0..$end] }; say now - INIT now
>  rakudo-moar 39d50a: OUTPUT: «3.3454447␤»
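
For reference, the three spellings being discussed, side by side (a small
sketch, no timings implied):

    my @array = 1..1000;
    my $end   = @array.end;

    say @array[0..$end].elems;   # eager index Range
    say @array[0..*].elems;      # lazy Whatever Range, with the extra end checks
    say @array[].elems;          # zen slice: the whole array, no index list at all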


[perl #125344] [PERF] Int..Whatever ranges are slow (~20 times slower than Int..Int)

2017-10-07 Thread Zoffix Znet via RT
On Sat, 06 Jun 2015 20:24:32 -0700, r...@hoelz.ro wrote:
> Let's say I have an array where @array.end == $end.  Using
> @array[0..$end] is about 20 times faster than @array[0..*].  I have
> attached an example script that demonstrates this.


Thanks for the report, however there's no issue with the compiler here.

You're comparing different behaviours: 0..* is a lazy Iterable, so extra checks
are performed to stop iteration of indices once an end is reached. If you make 
0..$end lazy,
so the same checks are performed, the perf difference actually gives 0..* a
small edge (likely due
to optimized Range.iterator)

 m: my @array = 1..1000; my $end = @array.end; for ^500 { $ = 
@array[0..*] }; say now - INIT now
 rakudo-moar 39d50a: OUTPUT: «3.050471␤»
 m: my @array = 1..1000; my $end = @array.end; for ^500 { $ = 
@array[lazy 0..$end] }; say now - INIT now
 rakudo-moar 39d50a: OUTPUT: «3.3454447␤»


DateTime.Str default formatter with sprintf is pretty slow

2017-09-19 Thread Thor Michael Støre
Hey,

When I profiled my "read CSV, munge, write CSV" script to see why it is a bit 
on the slow side DateTime.Str stood out. I saw the default formatter eventually 
reached sprintf, which consumed a lot of time. Each output line from my script 
has one date and time in ISO 8601 format, and when I changed the script so that 
they were stringified without sprintf the total runtime dropped from ~7 to ~4.5 
seconds on a short run that converts 800 lines to 1100.

A test case for this:

#!/usr/bin/env perl6

# Arg: Quick DateTime
sub MAIN( :$qDT = False ) {
while my $x++ < 1000 {
say DateTime.new( year => 2017, hour => (^24).roll, minute => 
(^60).roll,
  formatter => ( $qDT ?? &formatISO8601 !! Callable ) );
}
}

sub formatISO8601( DateTime $dt ) {
( 0 > $dt.year ?? '-'~ zPad( $dt.year.abs, 4 ) !!
  $dt.year <= 9999 ?? zPad( $dt.year, 4 )
   !! '+'~ zPad( $dt.year, 5 ) ) ~'-'~
zPad( $dt.month, 2 ) ~'-'~ zPad( $dt.day, 2 ) ~'T'~
zPad( $dt.hour, 2 ) ~':'~ zPad( $dt.minute, 2 ) ~':'~
   ( $dt.second.floor == $dt.second
   ?? zPad( $dt.second.Int, 2 )
   !! $dt.second.fmt('%09.6f') )
 ~
 ( $dt.timezone == 0
   ?? 'Z'
   !! $dt.timezone > 0
  ?? ( '+' ~ zPad( ($dt.timezone/3600).floor, 2 ) ~':'~
 zPad( ($dt.timezone/60%60).floor, 2 ) )
  !! ( '-' ~ zPad( ($dt.timezone.abs/3600).floor, 2 ) ~':'~
 zPad( ($dt.timezone.abs/60%60).floor, 2 ) ) )
}

sub zPad($s, $p) {'0' x $p - $s.chars ~ $s}

Running that with --qDT results in a bit less than a 4x speedup for me, with 
results piped to /dev/null. I wouldn’t know whether doing something like this 
in general is a good idea, or if sprintf/fmt should be improved.

Putting together that example, I noticed that Date.today also seems slow, 
taking about twice as long as DateTime.new. Replacing "year => 2017” with "date 
=> Date.today" in the example above slows it down by about 400ms on my (slow) 
laptop computer. I don't call that in a loop like this in my actual script 
anyhow, so that's no biggie for me, but it’s the sort of thing that might get 
put in a tight loop so it’s something to be aware of.

---
Thor Michael Støre
thormich...@gmail.com
Database guy in Kongsberg, Norway
Mobile: +47 97 15 14 09
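
A rough way to see the sprintf cost in isolation, independent of DateTime (the
loop count and the padded value are arbitrary):

    my $t = now;
    for ^100_000 { my $x = sprintf '%02d', 7 }
    say "sprintf:    {now - $t}s";

    $t = now;
    for ^100_000 { my $x = '0' x (2 - '7'.chars) ~ '7' }
    say "manual pad: {now - $t}s";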



[perl #131721] [PERF][REGRESSION] calling non-existing methods with try() is very slow

2017-07-10 Thread via RT
# New Ticket Created by  Shoichi Kaji 
# Please include the string:  [perl #131721]
# in the subject line of all future correspondence about this issue. 
# https://rt.perl.org/Ticket/Display.html?id=131721 >


I've noticed that with recent rakudo (2017.06-172-gc76d9324a),
it takes a lot of time to call non-existing methods with try().

(You can also find the following code in
https://gist.github.com/skaji/b597ec5ecef28449527bb746ab53bd30 )

```
#!/usr/bin/env perl6
use v6;

class Foo { }

my $start = now;

for ^30 {
try Foo.not-exists;
}

printf "Took %.3f sec\n", now - $start;
```

rakudo 2017.04 ---> 0.004 sec
```
❯ ~/env/rakudobrew/moar-2017.04/install/bin/perl6 -v
This is Rakudo version 2017.04 built on MoarVM version 2017.04
implementing Perl 6.c.
❯ ~/env/rakudobrew/moar-2017.04/install/bin/perl6 test.p6
Took 0.004 sec
```

rakudo 2017.06-172-gc76d9324a ---> 4.903 sec
```
❯ ~/env/rakudobrew/moar-nom/install/bin/perl6 -v
This is Rakudo version 2017.06-172-gc76d9324a built on MoarVM version
2017.06-43-g45b008fb
implementing Perl 6.c.
❯ ~/env/rakudobrew/moar-nom/install/bin/perl6 test.p6
Took 4.903 sec
```

Please note that:
* for ^30 { try die; } is not slow.
* ugexe pointed out that Foo.?not-exists; is still fast it seems though
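
A sketch of the .? workaround ugexe mentions; the loop count is made up, and
the safe-call operator returns Nil instead of throwing, so no exception has to
be constructed and discarded:

    class Foo { }

    my $start = now;
    for ^100_000 {
        Foo.?not-exists;          # Nil, no exception thrown
    }
    printf "Took %.3f sec\n", now - $start;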


Re: [perl #129776] [CUR][PERF] Extremely slow performance when running in directory with lots of modules

2016-10-06 Thread Elizabeth Mattijsen
By re-implementing Rakudo::Internals.DIR-RECURSE, the time has gone down to 
about a second.  This is still calculating a SHA1 of ~1000 files for the 
perl6/mu case, but the overhead of scanning the directories is now much less 
because directory entries are pruned much earlier, and directories like .git 
are now skipped (they weren’t before).  And there were also some junctions in there 
that weren’t strictly necessary.  Also dir() got a bit faster in the process as 
well (although DIR-RECURSE now no longer uses dir()).


> On 02 Oct 2016, at 20:10, Timo Paulssen  wrote:
> 
> Sorry, I was running the profile on a 4-weeks-old rakudo. After the
> optimizations i did to canonpath ~22 days ago the canonpath inclusive
> time went down to about 18% ...
> 
> FILETEST-D and FILETEST-F are in spots 3 and 4, but they only take 3594
> / 26881 msec and 2749 / 216298 msec per invocation, so they are just
> called really, really often.
> 
> next up is the loop from dir(), the one inside the gather. it sits at
> 10.44% inclusive time and 3.16% exclusive time.
> 
> dir itself takes 17.8% inclusive time and 3.12% exclusive time, which
> seems to suggest it has a bit of overhead i'm not quite sure where to find.
> 
> match is up next at about 8.8% inclusive and 2.91% exclusive. I'm not
> sure where exactly regex matches happen here; maybe a part of it comes
> from canonpath.
> 
> AUTOTHREAD is at 12% inclusive time, which i find interesting. Perhaps
> the reason is the test for dir, which by default is a none-junction.
> Perhaps we can just use a little closure instead that eq's against . and
> .. instead of going through a somewhat-more-expensive junction autothread.
> 
> That's all my thoughts for now.
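
For illustration, a user-space sketch of the kind of pruned walk described
above; this is not the actual Rakudo::Internals.DIR-RECURSE code, and the skip
list and extension filter are assumptions:

    sub walk(IO::Path $dir, :@skip = <.git .precomp>) {
        gather for $dir.dir -> $entry {
            if $entry.d {
                next if $entry.basename (elem) @skip;   # prune whole subtrees early
                take $_ for walk($entry, :@skip);
            }
            elsif $entry.f && $entry.extension eq 'pm6' | 'pm' {
                take $entry;
            }
        }
    }

    .say for walk('lib'.IO);   # hypothetical directory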



Re: [perl #129776] [CUR][PERF] Extremely slow performance when running in directory with lots of modules

2016-10-02 Thread Timo Paulssen
Sorry, I was running the profile on a 4-weeks-old rakudo. After the
optimizations i did to canonpath ~22 days ago the canonpath inclusive
time went down to about 18% ...

FILETEST-D and FILETEST-F are in spots 3 and 4, but they only take 3594
/ 26881 msec and 2749 / 216298 msec per invocation, so they are just
called really, really often.

next up is the loop from dir(), the one inside the gather. it sits at
10.44% inclusive time and 3.16% exclusive time.

dir itself takes 17.8% inclusive time and 3.12% exclusive time, which
seems to suggest it has a bit of overhead i'm not quite sure where to find.

match is up next at about 8.8% inclusive and 2.91% exclusive. I'm not
sure where exactly regex matches happen here; maybe a part of it comes
from canonpath.

AUTOTHREAD is at 12% inclusive time, which i find interesting. Perhaps
the reason is the test for dir, which by default is a none-junction.
Perhaps we can just use a little closure instead that eq's against . and
.. instead of going through a somewhat-more-expensive junction autothread.

That's all my thoughts for now.


Re: [perl #129776] [CUR][PERF] Extremely slow performance when running in directory with lots of modules

2016-10-02 Thread Timo Paulssen
Here's the results from a --profile-compile:

1. match (gen/moar/m-CORE.setting:12064), 12750 entries, 25.66%
inclusive time, 8.15% exclusive time
2.  (gen/moar/m-BOOTSTRAP.nqp:2081), 111530 entries, 4.59%
inclusive time, 4.36% exclusive time
3.  (gen/moar/m-CORE.setting:40776), 13148 entries, 6.85%
inclusive, 4.16% exclusive time
4. canonpath (gen/moar/m-CORE.setting:27919), 25500 entries, 40%
inclusive time, 3.12% exclusive time
5.  (gen/moar/m-CORE.setting:27924), 25501 entries, 2.99%
inclusive time, 2.9% exclusive time
6. FILETEST-D (gen/moar/m-CORE.setting:3909), 68721 entries, 2.73%
inclusive time, 2.73% exclusive time
7. STORE (gen/moar/m-CORE.setting:16678), 204002 entries, 8.21%
inclusive time, 2.42% exclusive time

I think the fourth spot, "canonpath" taking 40% inclusive time, is
what's messing us up here. If we could avoid running it as often, we
could get significant savings.


Re: [perl #129776] [CUR][PERF] Extremely slow performance when running in directory with lots of modules

2016-10-02 Thread Timo Paulssen
On 02/10/16 04:41, Lloyd Fournier wrote:
> String concat takes O(n^2) in rakudo I think. Using join in this kind of
> situation should be an improvement. (I'm commuting so can't test).

MoarVM implements "ropes" which make the performance a whole lot better.
join can still be a small improvement, but not as much as it would if
string concat was actually O(n^2).


Re: [perl #129776] [CUR][PERF] Extremely slow performance when running in directory with lots of modules

2016-10-01 Thread Lloyd Fournier
String concat takes O(n^2) in rakudo I think. Using join in this kind of
situation should be an improvement. (I'm commuting so can't test).
On Sat, 1 Oct 2016 at 7:11 PM, Zoffix Znet 
wrote:

> # New Ticket Created by  Zoffix Znet
> # Please include the string:  [perl #129776]
> # in the subject line of all future correspondence about this issue.
> # https://rt.perl.org/Ticket/Display.html?id=129776 >
>
>
> To reproduce:
>
>  cd $(mktemp -d); git clone https://github.com/perl6/mu; touch
> Foo.pm6;
>  perl6 -I. -MFoo -e ''
>
> The perl6 command will appear to hang, although it will finish after a
> long time.
>
> The problem is due to repo ID being calculated [^1] by slurping ALL of
> the .pm6? files in the directory and subdirectories,
> concatenating all that data, and calculating sha for it.
>
> To fix the issue, a different method of coming up with the repo ID
> needs to be used.
>
> [1]
>
> https://github.com/rakudo/rakudo/blob/nom/src/core/CompUnit/Repository/FileSystem.pm#L63
>
>
>


[perl #129776] [CUR][PERF] Extremely slow performance when running in directory with lots of modules

2016-10-01 Thread via RT
# New Ticket Created by  Zoffix Znet 
# Please include the string:  [perl #129776]
# in the subject line of all future correspondence about this issue. 
# https://rt.perl.org/Ticket/Display.html?id=129776 >


To reproduce:

 cd $(mktemp -d); git clone https://github.com/perl6/mu; touch Foo.pm6;
 perl6 -I. -MFoo -e ''

The perl6 command will appear to hang, although it will finish after a  
long time.

The problem is due to repo ID being calculated [^1] by slurping ALL of  
the .pm6? files in the directory and subdirectories,
concatenating all that data, and calculating sha for it.

To fix the issue, a different method of coming up with the repo ID  
needs to be used.

[1]  
https://github.com/rakudo/rakudo/blob/nom/src/core/CompUnit/Repository/FileSystem.pm#L63




Re: [perl #128760] [BUG] Adding postcircumfix operators makes compilation unacceptably slow

2016-07-28 Thread Elizabeth Mattijsen
Some more discussion about this at:

http://irclog.perlgeek.de/perl6-dev/2016-07-23#i_12894457

> On 28 Jul 2016, at 02:47, Damian Conway (via RT) 
>  wrote:
> 
> # New Ticket Created by  Damian Conway 
> # Please include the string:  [perl #128760]
> # in the subject line of all future correspondence about this issue. 
> # https://rt.perl.org/Ticket/Display.html?id=128760 >
> 
> 
>> perl6 --version
> This is Rakudo version 2016.07.1 built on MoarVM version 2016.07
> implementing Perl 6.c.
> 
> Adding a new postcircumfix operator increases compile-time by over 1 second
> per operator definition.
> For example:
> 
> BEGIN my $start = now;
> BEGIN { say "Compiling..." };
> INIT  { say "Compiled after {now - $start} seconds"; }
> END   { say "Done after {now - $start} seconds"; }
> 
> multi postcircumfix:« ⦋ ⦌ » (Any $prob, Any $state) {
>$state => $prob
> }
> BEGIN  { say "First definition after {now - $start} seconds"; }
> 
> multi postcircumfix:« ⦋ ⦌ » (Numeric $prob, Any $state) {
>$state => $prob
> }
> BEGIN  { say "Second definition after {now - $start} seconds"; }
> 
> multi postcircumfix:« ⦋ ⦌ » (Str $prob, Any $state) {
>$state => $prob
> }
> BEGIN  { say "Third definition after {now - $start} seconds"; }
> 
> multi postcircumfix:« ⦋ ⦌ » (Int $prob, Any $state) {
>$state => $prob
> }
> BEGIN  { say "Fourth definition after {now - $start} seconds"; }
> 
> say   0.5⦋'cat'⦌;
> say 'cat'⦋0.5⦌;
> say 'cat'⦋'cat'⦌;
> say 1⦋'cat'⦌;
> say 1⦋1⦌;



[perl #128760] [BUG] Adding postcircumfix operators makes compilation unacceptably slow

2016-07-27 Thread via RT
# New Ticket Created by  Damian Conway 
# Please include the string:  [perl #128760]
# in the subject line of all future correspondence about this issue. 
# https://rt.perl.org/Ticket/Display.html?id=128760 >


> perl6 --version
This is Rakudo version 2016.07.1 built on MoarVM version 2016.07
implementing Perl 6.c.

Adding a new postcircumfix operator increases compile-time by over 1 second
per operator definition.
For example:

BEGIN my $start = now;
BEGIN { say "Compiling..." };
INIT  { say "Compiled after {now - $start} seconds"; }
END   { say "Done after {now - $start} seconds"; }

multi postcircumfix:« ⦋ ⦌ » (Any $prob, Any $state) {
$state => $prob
}
BEGIN  { say "First definition after {now - $start} seconds"; }

multi postcircumfix:« ⦋ ⦌ » (Numeric $prob, Any $state) {
$state => $prob
}
BEGIN  { say "Second definition after {now - $start} seconds"; }

multi postcircumfix:« ⦋ ⦌ » (Str $prob, Any $state) {
$state => $prob
}
BEGIN  { say "Third definition after {now - $start} seconds"; }

multi postcircumfix:« ⦋ ⦌ » (Int $prob, Any $state) {
$state => $prob
}
BEGIN  { say "Fourth definition after {now - $start} seconds"; }

say   0.5⦋'cat'⦌;
say 'cat'⦋0.5⦌;
say 'cat'⦋'cat'⦌;
say 1⦋'cat'⦌;
say 1⦋1⦌;
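
Since all of the cost is at compile time, --stagestats (used for the
10_000-say case elsewhere in these reports) is a quick way to see where it
goes; presumably the extra second per definition lands in the parse stage.
The file name is hypothetical:

    $ perl6 --stagestats postcircumfix-ops.p6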


[perl #128388] Callable.assuming() is slow

2016-06-11 Thread Hinrik Örn Sigurðsson
# New Ticket Created by  "Hinrik Örn Sigurðsson" 
# Please include the string:  [perl #128388]
# in the subject line of all future correspondence about this issue. 
# https://rt.perl.org/Ticket/Display.html?id=128388 >


Calling .assuming() on a subroutine is way too slow for general usage.

$ time perl6 -e'sub foo { my  = (); }; for 1..100 -> $x { 
foo() }'

real0m4.235s
user0m4.192s
sys 0m0.044s

$ time perl6 -e'sub foo { }; for 1..100 -> $x { foo() }'
real0m0.106s
user0m0.068s
sys 0m0.036s
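
If .assuming sits on a hot path, a plain closure gives the same call shape for
simple cases without the signature rewriting .assuming performs (a sketch, not
a drop-in replacement for every use of .assuming):

    sub add($a, $b) { $a + $b }

    my &add-ten-assuming = &add.assuming(10);       # the slow path from this ticket
    my &add-ten-closure  = -> $b { add(10, $b) };   # hand-rolled curry

    say add-ten-assuming(5);   # 15
    say add-ten-closure(5);    # 15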


Re: [perl #127881] [BUG] slow array slicing

2016-04-13 Thread Daniel Green
Where in Rakudo is the slowdown? I'm by no means a compiler developer, but
I enjoy tinkering.

Dan

On Wed, Apr 13, 2016 at 8:04 AM, Elizabeth Mattijsen via RT <
perl6-bugs-follo...@perl.org> wrote:

> > On 12 Apr 2016, at 04:40, Daniel Green (via RT) <
> perl6-bugs-follo...@perl.org> wrote:
> >
> > # New Ticket Created by  Daniel Green
> > # Please include the string:  [perl #127881]
> > # in the subject line of all future correspondence about this issue.
> > # https://rt.perl.org/Ticket/Display.html?id=127881 >
> >
> >
> > '@array[0, 3, 7]' is much slower than '(@array[0], @array[3], @array[7])'
> >
> >
> > time perl6 -e 'my @a = ^500;my @f;my $s = @a.elems;loop (my $i1 = 0; $i1
> <
> > $s-1; $i1++) { loop (my $i2 = $i1+1; $i2 < $s; $i2++) { @f.push(@a[$i1,
> > $i2]) } };'
> >
> > real0m14.974s
> > user0m14.947s
> > sys 0m0.017s
> >
> >
> > time perl6 -e 'my @a = ^500;my @f;my $s = @a.elems;loop (my $i1 = 0; $i1
> <
> > $s-1; $i1++) { loop (my $i2 = $i1+1; $i2 < $s; $i2++) { @f.push((@a[$i1],
> > @a[$i2])) } };'
> >
> > real0m0.897s
> > user0m0.877s
> > sys 0m0.017s
> >
> >
> > With Rakudo version 2016.03 built on MoarVM version 2016.03
>
> This is a known issue.  Anytime you use lists, the optimization paths are
> not that well developed yet.  To give you another example:
>
> $ 6 'for ^10 { my ($a,$b) = 42,666 }'
> real0m1.628s
> user0m1.582s
> sys 0m0.041s
>
> $ 6 'for ^10 { my $a = 42; my $b = 666 }'
> real0m0.180s
> user0m0.152s
> sys 0m0.026s
>
> $ 6 'for ^10 { my $a; my $b }'
> real0m0.170s
> user0m0.144s
> sys 0m0.025s
>
> $ 6 'say (1628 - 170) / (180 - 170)’
> 145.8
>
>
> Liz
>
>


Re: [perl #127881] [BUG] slow array slicing

2016-04-13 Thread Elizabeth Mattijsen
> On 12 Apr 2016, at 04:40, Daniel Green (via RT) 
>  wrote:
> 
> # New Ticket Created by  Daniel Green 
> # Please include the string:  [perl #127881]
> # in the subject line of all future correspondence about this issue. 
> # https://rt.perl.org/Ticket/Display.html?id=127881 >
> 
> 
> '@array[0, 3, 7]' is much slower than '(@array[0], @array[3], @array[7])'
> 
> 
> time perl6 -e 'my @a = ^500;my @f;my $s = @a.elems;loop (my $i1 = 0; $i1 <
> $s-1; $i1++) { loop (my $i2 = $i1+1; $i2 < $s; $i2++) { @f.push(@a[$i1,
> $i2]) } };'
> 
> real0m14.974s
> user0m14.947s
> sys 0m0.017s
> 
> 
> time perl6 -e 'my @a = ^500;my @f;my $s = @a.elems;loop (my $i1 = 0; $i1 <
> $s-1; $i1++) { loop (my $i2 = $i1+1; $i2 < $s; $i2++) { @f.push((@a[$i1],
> @a[$i2])) } };'
> 
> real0m0.897s
> user0m0.877s
> sys 0m0.017s
> 
> 
> With Rakudo version 2016.03 built on MoarVM version 2016.03

This is a known issue.  Anytime you use lists, the optimization paths are not 
that well developed yet.  To give you another example:

$ 6 'for ^10 { my ($a,$b) = 42,666 }'
real0m1.628s
user0m1.582s
sys 0m0.041s

$ 6 'for ^10 { my $a = 42; my $b = 666 }'
real0m0.180s
user0m0.152s
sys 0m0.026s

$ 6 'for ^10 { my $a; my $b }'
real0m0.170s
user0m0.144s
sys 0m0.025s

$ 6 'say (1628 - 170) / (180 - 170)’
145.8


Liz

[perl #127881] [BUG] slow array slicing

2016-04-12 Thread via RT
# New Ticket Created by  Daniel Green 
# Please include the string:  [perl #127881]
# in the subject line of all future correspondence about this issue. 
# https://rt.perl.org/Ticket/Display.html?id=127881 >


'@array[0, 3, 7]' is much slower than '(@array[0], @array[3], @array[7])'


time perl6 -e 'my @a = ^500;my @f;my $s = @a.elems;loop (my $i1 = 0; $i1 <
$s-1; $i1++) { loop (my $i2 = $i1+1; $i2 < $s; $i2++) { @f.push(@a[$i1,
$i2]) } };'

real0m14.974s
user0m14.947s
sys 0m0.017s


time perl6 -e 'my @a = ^500;my @f;my $s = @a.elems;loop (my $i1 = 0; $i1 <
$s-1; $i1++) { loop (my $i2 = $i1+1; $i2 < $s; $i2++) { @f.push((@a[$i1],
@a[$i2])) } };'

real0m0.897s
user0m0.877s
sys 0m0.017s


With Rakudo version 2016.03 built on MoarVM version 2016.03

Dan
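
If the goal is simply to get both elements into @f, .append with two scalar
subscripts sidesteps the multi-element subscript entirely; note that it
flattens, unlike pushing a single two-element List (small sketch):

    my @a = ^500;
    my @f;
    @f.append(@a[3], @a[7]);     # two scalar subscripts, two new elements
    @f.push( (@a[3], @a[7]) );   # one two-element List pushed as one element
    say @f.elems;                # 3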


Re: perl6 --profile-compile | --profile: both very slow and depend on Internet resources

2016-04-01 Thread Tom Browder
On Fri, Apr 1, 2016 at 10:08 AM, Tom Browder  wrote:
> On Fri, Apr 1, 2016 at 9:39 AM, Timo Paulssen  wrote:
>> The profiler's data blob is a massive, gigantic blob of json (ls the file
>> and you'll see).
>
> Ah, yes:  a 2.8+ million character line!
...
> What about creating a text output in a format something like gprof?
> It looks like tadzik has some code that could be used as a start.

And look at something like:

  https://github.com/open-source-parsers/jsoncpp

to use as the JSON C++ library.

-Tom


Re: perl6 --profile-compile | --profile: both very slow and depend on Internet resources

2016-04-01 Thread Timo Paulssen



On 01/04/16 17:08, Tom Browder wrote:



Alternatively, there's a qt-based profiler up on tadzik's github that can
read the json blob (you will have to --profile-filename=blahblah.json to get
that), but it doesn't evaluate as much of the data - it'll potentially even
fail completely what with the recent changes i made :S

...

That looks like the place to start to me...


The one big problem with the qt-based profiler is that the json library 
it uses refuses to work with files over a certain limit, and we easily 
reach that limit. So that's super dumb :(

  - Timo


Re: perl6 --profile-compile | --profile: both very slow and depend on Internet resources

2016-04-01 Thread Tom Browder
On Fri, Apr 1, 2016 at 9:39 AM, Timo Paulssen  wrote:
> The profiler's data blob is a massive, gigantic blob of json (ls the file
> and you'll see).

Ah, yes:  a 2.8+ million character line!

> You can easily search the urls to point at local files instead of
> the CDN.
...
> Alternatively, there's a qt-based profiler up on tadzik's github that can
> read the json blob (you will have to --profile-filename=blahblah.json to get
> that), but it doesn't evaluate as much of the data - it'll potentially even
> fail completely what with the recent changes i made :S
...

That looks like the place to start to me...

What about creating a text output in a format something like gprof?
It looks like tadzik has some code that could be used as a start.

-Tom


Re: perl6 --profile-compile | --profile: both very slow and depend on Internet resources

2016-04-01 Thread Timo Paulssen
The profiler's data blob is a massive, gigantic blob of json (ls the 
file and you'll see).


You can easily search the urls to point at local files instead 
of the CDN.


Alternatively, there's a qt-based profiler up on tadzik's github that 
can read the json blob (you will have to 
--profile-filename=blahblah.json to get that), but it doesn't evaluate 
as much of the data - it'll potentially even fail completely what with 
the recent changes i made :S


The biggest contributor to filesize for the profiler is the complexity 
of the call tree. If you can cut out parts and pieces of your program, 
you should be able to profile them individually just fine.


In my experience, firefox is more likely to work with the big profiles.

If anybody is interested in improving our html/js profiler front-end, 
please do speak up and you'll be showered with as much guidance and 
praise as you need :)

  - Timo
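
A sketch of the "point the urls at local files" idea in Perl 6 itself; the
profile file name and the local paths are made up, and only the two bootstrap
URLs from Tom's original message below are substituted:

    my $html  = 'profile-1234.html'.IO.slurp;   # saved --profile output
    my @swaps =
        'https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css' => 'local/bootstrap.min.css',
        'https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap-theme.min.css' => 'local/bootstrap-theme.min.css';
    for @swaps -> $pair {
        $html .= subst($pair.key, $pair.value, :g);   # rewrite CDN reference to a local copy
    }
    'profile-1234-local.html'.IO.spurt($html);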


Re: perl6 --profile-compile | --profile: both very slow and depend on Internet resources

2016-04-01 Thread Tom Browder
On Fri, Apr 1, 2016 at 9:28 AM, Tom Browder <tom.brow...@gmail.com> wrote:
> Is there any easy way to get the profilers to use local code (css, js,
> etc.) rather than reading across a sometimes slow internet connection?
>
> I'm using both Chrome and Iceweasel with the same effects: slow
> loading scripts and always seem to be reading:

Sorry, the slow script is:

  https://ajax.googleapis.com/aj…angularjs/1.4.2/angular.min.js:58
  ...
  https://ajax.googleapis.com/aj…angularjs/1.4.2/angular.min.js:179
  ...
  https://ajax.googleapis.com/aj…angularjs/1.4.2/angular.min.js:184
  ...
  https://ajax.googleapis.com/aj…angularjs/1.4.2/angular.min.js:78
  ...
  https://ajax.googleapis.com/aj…angularjs/1.4.2/angular.min.js:93
  ...

-Tom


perl6 --profile-compile | --profile: both very slow and depend on Internet resources

2016-04-01 Thread Tom Browder
Is there any easy way to get the profilers to use local code (css, js,
etc.) rather than reading across a sometimes slow internet connection?

I'm using both Chrome and Iceweasel with the same effects: slow
loading scripts and always seem to be reading:

https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css
https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap-theme.min.css

Thanks.

Best regards,

-Tom


Re: [perl #127330] [SLOW] 10_000 lines with 「say ‘a’;」 take 16 seconds to run

2016-02-03 Thread Bart Wiegmans
Yes, precompilation will help. Although we're not ready to make true or
fake executables yet, you can compile to the moarvm bytecode file:

perl6 --target=mbc --output=print.moarvm print.p6
But, this crashes during deserialization for me, so I'm not sure if that
can be made to work as it stands.
Regards,
Bart

2016-02-03 13:29 GMT+01:00 Tom Browder :

> On Thursday, January 21, 2016, Elizabeth Mattijsen  wrote:
>>
>> > On 21 Jan 2016, at 00:42, Alex Jakimenko (via RT) <
>> perl6-bugs-follo...@perl.org>
>>
> ...
>
>> As you can see, most of the time is spent parsing the file, and then
>> optimizing and then generating the bytecode.
>>
>> This won’t get any faster until we spend time on optimizing grammars in
>> NQP and/or MoarVM performance in general.
>
>
> Wouldn't the coming feature to compile the script into "foo.exe" help,
> even before other major optimizations?
>
> -Tom
>
>>


[perl #127330] [SLOW] 10_000 lines with 「say ‘a’;」 take 16 seconds to run

2016-02-03 Thread Tom Browder
On Thursday, January 21, 2016, Elizabeth Mattijsen > wrote:
>
> > On 21 Jan 2016, at 00:42, Alex Jakimenko (via RT) <
> perl6-bugs-follo...@perl.org>
>
...

> As you can see, most of the time is spent parsing the file, and then
> optimizing and then generating the bytecode.
>
> This won’t get any faster until we spend time on optimizing grammars in
> NQP and/or MoarVM performance in general.


Wouldn't the coming feature to compile the script into "foo.exe" help, even
before other major optimizations?

-Tom

>


Re: [perl #127330] [SLOW] 10_000 lines with 「say ‘a’;」 take 16 seconds to run

2016-01-21 Thread Elizabeth Mattijsen

> On 21 Jan 2016, at 00:42, Alex Jakimenko (via RT) 
> <perl6-bugs-follo...@perl.org> wrote:
> 
> # New Ticket Created by  Alex Jakimenko 
> # Please include the string:  [perl #127330]
> # in the subject line of all future correspondence about this issue. 
> # https://rt.perl.org/Ticket/Display.html?id=127330 >
> 
> 
> Create a file with 10_000 lines:
> 
> for x in {1..1}; do echo 'say ‘a’;' >> print.p6; done
> 
> And then time it:
> 
> time perl6 print.p6
> 
> It will take around 16 seconds to run.
> 
> You can also use 「print」 instead of 「say」, it does not matter.
> 
> The time grows linearly. I haven't done any serious benchmarks but please
> look at the attached graph, the data just speaks for itself.
> 
> Very important note (by Zoffix++):
> “It's all in compilation too.
> 17 seconds before it told me I got a syntax error.
> It takes 17s to run 10,000 prints on my box, but if I move them into a
> module and a sub and precompile the module, then I get 1.2s run. This is
> all compared to 0.2s run with Perl 5 on the same box”
> 
> Perhaps sub lookups are that slow?
> 
> Originally reported by zhmylove++.
> 

The delay is because of the parsing of the code:  using the --stagestats 
parameter, I get something like this on my machine:

$ time perl6 --stagestats print.p6 
Stage start  :   0.000
Stage parse  :   5.781
Stage syntaxcheck:   0.000
Stage ast:   0.000
Stage optimize   :   0.527
Stage mast   :   1.088
Stage mbc:   0.116
Stage moar   :   0.000
…
real0m7.694s
user0m7.546s
sys 0m0.194s

As you can see, most of the time is spent parsing the file, and then optimizing 
and then generating the bytecode.


This won’t get any faster until we spend time on optimizing grammars in NQP 
and/or MoarVM performance in general.



Liz

[perl #127330] [SLOW] 10_000 lines with 「say ‘a’;」 take 16 seconds to run

2016-01-20 Thread via RT
# New Ticket Created by  Alex Jakimenko 
# Please include the string:  [perl #127330]
# in the subject line of all future correspondence about this issue. 
# https://rt.perl.org/Ticket/Display.html?id=127330 >


Create a file with 10_000 lines:

for x in {1..1}; do echo 'say ‘a’;' >> print.p6; done

And then time it:

time perl6 print.p6

It will take around 16 seconds to run.

You can also use 「print」 instead of 「say」, it does not matter.

The time grows linearly. I haven't done any serious benchmarks but please
look at the attached graph, the data just speaks for itself.

Very important note (by Zoffix++):
“It's all in compilation too.
17 seconds before it told me I got a syntax error.
It takes 17s to run 10,000 prints on my box, but if I move them into a
module and a sub and precompile the module, then I get 1.2s run. This is
all compared to 0.2s run with Perl 5 on the same box”

Perhaps sub lookups are that slow?

Originally reported by zhmylove++.


Re: [perl #127064] Variable interpolation in regex very slow

2015-12-31 Thread Jules Field



On 29/12/2015 23:05, Timo Paulssen via RT wrote:

On 12/29/2015 12:46 AM, Jules Field (via RT) wrote:

# New Ticket Created by  Jules Field
# Please include the string:  [perl #127064]
# in the subject line of all future correspondence about this issue.
# https://rt.perl.org/Ticket/Display.html?id=127064 >


Given
my @lines = "some-text.txt".IO.lines;
my $s = 'Jules';
(some-text.txt is about 43k lines)

Doing
my @matching = @lines.grep(/ $s /);
is about 50 times slower than
my @matching = @lines.grep(/ Jules /);

And if $s happened to contain anything other than literals, so I had to use
my @matching = @lines.grep(/ <$s> /);
then it's nearly 150 times slower.

my @matching = @lines.grep($s);
doesn't appear to work. It matches 0 lines but doesn't die.

The lack of Perl5's straightforward variable interpolation in regexs is 
crippling the speed.
Is there a faster alternative? (other than EVAL to build the regex)


For now, you can use @lines.grep(*.contains($s)), which will be
sufficiently fast.

Ideally, our regex optimizer would turn this simple regex into a code
that uses .index to find a literal string and construct a match object
for that. Or even - if you put a literal "so" in front - turn it into
.contains($literal) if it knows that the match object will only be
inspected for true/false.

Until then, we ought to be able to make interpolation a bit faster.
   - Timo

Many thanks for that. I hadn't thought to use Whatever.

I would ideally also be doing case-insensitive regexps, but they are 50 
times slower than case-sensitive ones, even in trivial cases.
Maybe a :adverb for rx// that says "give me static (i.e. Perl5-style) 
interpolation in this regex"?
I can see the advantage of passing the variables to the regex engine, as 
then they can change over time.


But that's not something I want to do very often, far more frequently I 
just need to construct the regex at run-time and have it go as fast as 
possible.


Just thoughts from a big Perl5 user (e.g. MailScanner is 50k lines of it!).

Jules

--
ju...@jules.uk
Twitter: @JulesFM

'If I were a Brazilian without land or money or the means to feed
 my children, I would be burning the rain forest too.' - Sting





Re: [perl #127064] Variable interpolation in regex very slow

2015-12-29 Thread Timo Paulssen
On 12/29/2015 12:46 AM, Jules Field (via RT) wrote:
> # New Ticket Created by  Jules Field 
> # Please include the string:  [perl #127064]
> # in the subject line of all future correspondence about this issue. 
> # https://rt.perl.org/Ticket/Display.html?id=127064 >
>
>
> Given
>my @lines = "some-text.txt".IO.lines;
>my $s = 'Jules';
> (some-text.txt is about 43k lines)
>
> Doing
>my @matching = @lines.grep(/ $s /);
> is about 50 times slower than
>my @matching = @lines.grep(/ Jules /);
>
> And if $s happened to contain anything other than literals, so I had to use
>my @matching = @lines.grep(/ <$s> /);
> then it's nearly 150 times slower.
>
>my @matching = @lines.grep($s);
> doesn't appear to work. It matches 0 lines but doesn't die.
>
> The lack of Perl5's straightforward variable interpolation in regexs is 
> crippling the speed.
> Is there a faster alternative? (other than EVAL to build the regex)
>

For now, you can use @lines.grep(*.contains($s)), which will be
sufficiently fast.

Ideally, our regex optimizer would turn this simple regex into a code
that uses .index to find a literal string and construct a match object
for that. Or even - if you put a literal "so" in front - turn it into
.contains($literal) if it knows that the match object will only be
inspected for true/false.

Until then, we ought to be able to make interpolation a bit faster.
  - Timo


[perl #127064] Variable interpolation in regex very slow

2015-12-29 Thread via RT
# New Ticket Created by  Jules Field 
# Please include the string:  [perl #127064]
# in the subject line of all future correspondence about this issue. 
# https://rt.perl.org/Ticket/Display.html?id=127064 >


Given
   my @lines = "some-text.txt".IO.lines;
   my $s = 'Jules';
(some-text.txt is about 43k lines)

Doing
   my @matching = @lines.grep(/ $s /);
is about 50 times slower than
   my @matching = @lines.grep(/ Jules /);

And if $s happened to contain anything other than literals, so I had to use
   my @matching = @lines.grep(/ <$s> /);
then it's nearly 150 times slower.

   my @matching = @lines.grep($s);
doesn't appear to work. It matches 0 lines but doesn't die.

The lack of Perl5's straightforward variable interpolation in regexs is 
crippling the speed.
Is there a faster alternative? (other than EVAL to build the regex)

-- 
ju...@jules.uk


[perl #117403] [BUG] .first on an infinite list generated with gather take is excruciatingly slow

2015-08-29 Thread Will Coleda via RT
On Wed Mar 11 20:16:11 2015, Mouq wrote:
 On Fri Mar 29 06:37:47 2013, thunderg...@comcast.net wrote:
  my @j := (0 .. *).list;
  say @j.first: {$^_ > 10};
 
  returns 11 in .01 seconds on my system
 
  my @l := gather map {take $_ }, 0 .. *;
  say @l.first: {$^_ > 10};
 
  returns 11 in 63.97 seconds on my system
 
 FWIW, on my system with perl6 version 2015.02-275-g3da1bbd built on
 MoarVM version 2015.02-49-gb5b5435, I get about 8 seconds for:
 
 perl6 -e'my @l := gather map {take $_ }, 0 .. *; say @l.first: {$_ >
 10};'
 
 And about 1 second for:
 
 perl6 -e'my @j := (0 .. *).list; say @j.first: {$_ > 10}'
 
 I'm thinking this is less outrageous. This should probably be added to
 perl6-bench or something to ensure we don't regress. Once we have some
 kind of test in place, I think this ticket can be resolved.
 
 ~Mouq

on my OSX box on nom: real  0m4.393s, and real  0m0.532s

on glr, the first is now a syntax error:
second is slower with: real 0m1.643s
-- 
Will Coke Coleda
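
For comparison, the same search without the intermediate list binding (a quick
sketch; .first stops at the first match, so the infinite sources are fine):

    my $lazy := gather { take $_ for 0..* };
    say $lazy.first(* > 10);    # 11, via gather/take
    say (0..*).first(* > 10);   # 11, directly on the lazy Range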


Re: [perl #125344] Int..Whatever ranges are slow (~20 times slower than Int..Int)

2015-06-07 Thread Elizabeth Mattijsen

 On 06 Jun 2015, at 21:24, Rob Hoelz (via RT) perl6-bugs-follo...@perl.org 
 wrote:
 
 # New Ticket Created by  Rob Hoelz 
 # Please include the string:  [perl #125344]
 # in the subject line of all future correspondence about this issue. 
 # URL: https://rt.perl.org/Ticket/Display.html?id=125344 
 
 
 Let's say I have an array where @array.end == $end.  Using @array[0..$end] is 
 about 20 times faster than @array[0..*].  I have attached an example script 
 that demonstrates this.

If it’s the whole array that you want, have you considered using the zen slice? 
 That is still a lot faster than [0..$end].

Looking at why the * case is so slow.



Liz

[perl #125344] Int..Whatever ranges are slow (~20 times slower than Int..Int)

2015-06-06 Thread via RT
# New Ticket Created by  Rob Hoelz 
# Please include the string:  [perl #125344]
# in the subject line of all future correspondence about this issue. 
# URL: https://rt.perl.org/Ticket/Display.html?id=125344 


Let's say I have an array where @array.end == $end.  Using @array[0..$end] is 
about 20 times faster than @array[0..*].  I have attached an example script 
that demonstrates this.



[perl #120380] [BUG] Something is extremely slow with a loop with string concat in Rakudo on the JVM

2014-10-17 Thread Christian Bartolomaeus via RT
To me it looks like the loop and the string concat are okay-ish (this is on a 
not too beefy machine: i3 3.33 GHz, 8 GiB RAM):

$ perl6-m -e 'my $file = ; for ^2 {$file ~= $_;};'

real0m0.500s
user0m0.436s
sys 0m0.060s

$ perl6-p -e 'my $file = ; for ^2 {$file ~= $_;};'

real0m1.717s
user0m1.588s
sys 0m0.116s

$ perl6-j -e 'my $file = ; for ^2 {$file ~= $_;};'

real0m10.312s
user0m19.917s
sys 0m0.352s

The smartmatch makes it somewhat slower (but much slower for parrot):

$ perl6-m -e 'my $file = ; for ^2 {$file ~= $_;}; $file ~~ /(\d+) +% 
;/;'

real0m1.210s
user0m1.140s
sys 0m0.064s

$ perl6-p -e 'my $file = ; for ^2 {$file ~= $_;}; $file ~~ /(\d+) +% 
;/;'

real0m12.436s
user0m12.141s
sys 0m0.240s

$ perl6-j -e 'my $file = ; for ^2 {$file ~= $_;}; $file ~~ /(\d+) +% 
;/;'

real0m13.147s
user0m25.122s
sys 0m0.428s

And now with say (STDOUT redirected to files, which have 20002 lines each) 
which makes it significantly slower:

$ perl6-m -e 'my $file = ; for ^2 {$file ~= $_;}; say $file ~~ /(\d+) 
+% ;/;'  foo1

real0m20.079s
user0m19.217s
sys 0m0.776s

$ perl6-p -e 'my $file = ; for ^2 {$file ~= $_;}; say $file ~~ /(\d+) 
+% ;/;'  foo2

real1m58.058s
user1m56.899s
sys 0m0.328s

$ perl6-j -e 'my $file = ; for ^2 {$file ~= $_;}; say $file ~~ /(\d+) 
+% ;/;'  foo3

real0m38.323s
user0m59.948s
sys 0m0.536s

All in all I'd say that's reasonable for the time being. (Especially the times 
with JVM are nowhere near 4 minutes.) If you disagree, please reopen the ticket.


[perl #120380] [BUG] Something is extremely slow with a loop with string concat in Rakudo on the JVM

2013-10-27 Thread Carl Mäsak
# New Ticket Created by  Carl Mäsak 
# Please include the string:  [perl #120380]
# in the subject line of all future correspondence about this issue. 
# URL: https://rt.perl.org/Ticket/Display.html?id=120380 


pippo Hello. Does anybody tried this on rakudo-jvm? my $file = ;
for ^2 {$file ~= $_;}; say $file ~~ /(\d+) +% ';'/;
masak pippo: ...probably not?
FROGGS no, not exactly this line... why?
pippo It takes too much long to execute!
FROGGS pippo: how too muh long?
pippo Really I do not know. It blows me out before it finishes.
Niecza does it in few seconds.
masak pippo: that sounds like a bug, then.
pippo I do not know. I arrived at this trying to understand why
slurping a long file about 20'000 lines takes so long (lines slurp
myfile).
masak pippo: it would be interesting to see timings of this for p5
niecza r-p r-j
masak on your machine.
pippo I'll do...
pippo masak: Niecza: real 0m7.866s user 0m13.510s sys 0m0.135s (but
changed 20'000 to 10'000 in the for loop)
masak pippo: nice.
pippo masak: rakudo-jvm I am still waiting after 4 minutes
masak pippo: ok, submitting rakudobug.

Niecza's 7 s result feels reasonable. 4 minutes doesn't feel
reasonable. This ticket can be closed when things feel reasonable. :)


[perl #119865] [SLOW] Enough quantifiers in the declarative prefix in a regex takes exponential time in Rakudo

2013-09-17 Thread Carl Mäsak
# New Ticket Created by  Carl Mäsak 
# Please include the string:  [perl #119865]
# in the subject line of all future correspondence about this issue. 
# URL: https://rt.perl.org:443/rt3/Ticket/Display.html?id=119865 


masak r: my $N = 5; my $rx = a? x $N ~ a x $N; say a x $N ~~ /$rx/
camelia rakudo c0814a: OUTPUT«「a」␤␤»
masak r: my $N = 32; my $rx = a? x $N ~ a x $N; say a x $N ~~ /$rx/
tadzik moritz: oh yes, that'd work too
camelia rakudo c0814a: OUTPUT«(timeout)»
* masak submits rakudobug
* masak throws in http://swtch.com/~rsc/regexp/regexp1.html as a reference

As that page shows, this can be made into a polynomial thing rather
than an exponential one, by dealing directly with the NFA. NQP's regex
engine should be perfectly suited for this already.

Even disregarding that, there are various optimization tricks (à la
Perl 5) that can be done to make such a regex do less work.


[perl #72868] [BUG] Rakudo is very slow and resource-intensive in parsing '?' x 80

2010-02-16 Thread Carl Mäsak
# New Ticket Created by  Carl Mäsak 
# Please include the string:  [perl #72868]
# in the subject line of all future correspondence about this issue. 
# URL: http://rt.perl.org/rt3/Ticket/Display.html?id=72868 


diakopter masak:
diakopter rakudo:

diakopter parse timeout
moritz_ (even the timeout feels quicker today :-)
masak diakopter: was that directed specifically at me? :)
diakopter oh oops, 'scuse me
diakopter masakbot: see above
masak diakopter: yes, but why?
masak all I see is you tormenting the implementation...
diakopter seems to me that's worthy of a bugreport
masak sure, as soon as I figure out what's wrong with it.
diakopter (*I'd* want to know about a parse timeout)
masak it's not so much a bug in Rakudo, is it?
diakopter yesbut
masak more of a bug in p6eval.
diakopter LOL
diakopter I think I see your point
diakopter unfortunately
diakopter .
masak I agree that it'd be nice to have the information in question
from p6eval.
masak timeouts are historically difficult to get right in the evalbot.
diakopter o wait
diakopter what
diakopter how could that be a bug in p6eval
masak p6eval waits X seconds to get a result back from Rakudo.
masak when it gets nothing, it reports 'no output'.
diakopter I'm confused
diakopter rakudo: ???
p6eval rakudo 65e2d3: OUTPUT«Confused at line 11, near ??␤ [...]
diakopter rakudo: 
p6eval rakudo 65e2d3: OUTPUT«Confused at line 11, near ??␤ [...]
diakopter rakudo: ?
p6eval rakudo 65e2d3: OUTPUT«Confused at line 11, near ??␤ [...]
masak I run your 'program' locally, and it gives me 'Confused'. yes,
like that.
diakopter rakudo: 
masak but throw enough confusion at it, and it won't have time to
report its confusion :)
p6eval rakudo 65e2d3:  ( no output )
diakopter how long does it take your local one to report Confused on
the long one above
* masak times it
* diakopter falls into the clockface
masak this feels quadratic to me :)
* diakopter sniffs
moritz_ so it's still in P :-)
masak add it to the FAQ: consider not writing 80 question marks one
after the other in your program.
masak moritz_: I might be wrong. maybe it's exponential, even.
diakopter std:

p6eval std 29742: OUTPUT«===SORRY!===␤Found ?? but
no !!; possible precedence problem [...]
masak STD++ # fast, honest
diakopter std:

p6eval std 29742: OUTPUT«===SORRY!===␤Found ?? but
no !!; possible precedence problem [...]
* diakopter claps
diakopter let me rephrase.
diakopter if I were the parser-generator's/interpreter's
author/maintainer, *I'd* want to know about seemingly exponential
behavior when parsing 80 question marks.
masak diakopter: waiting for Rakudo to finish parsing 80 question
marks, I tend to agree :)
* masak submits rakudobug

Note: I had to abort my local Rakudo 80-question-mark run after about
18 minutes, because my usually very trusty laptop was growing
dangerously sluggish and unresponsive, gobbling up ~1.5 Gb of memory.


Re: [perl #68762] very slow $*.IN reading

2009-08-25 Thread alexandre masselot

Thanks for the answer,

Therefore we'll keep our eyes open for when the speed improves a little bit,
because everything Damian showed us last week was just astonishing!

But unfortunately, in our domain of life science we rarely have problems with
less than thousands of input lines.


Looking at the Rakudo web site looks to be the place to stay aware
of what is happening.


Thanks again for all the effort you put in.

cheers
Alex

On Aug 24, 2009, at 7:00 PM, Moritz Lenz via RT wrote:


On Mon Aug 24 03:06:31 2009, alex_mass wrote:

so I made 4 files, with 1, 10, 100 ... 1  lines (text, approx 100
char per line)
code timings (with the command) for i in 1 10 100 1000 1; do echo
number of lines=$i; time ./easy-aligned-p5.pl /tmp/a-$i.txt; done

user time is reported, in seconds

nblines perl 5  perl 6  `wc -l`
1   .011.96 .001
10  .0051.0 .001
100 .0051.3 .000
1000.0054.4 .001
1   .01536  .002

Well, is there a problem here, or just something I have not  
understood


Rakudo is currently much slower (typically a factor 500) than Perl 5.
That's known and being worked on.


(I hoped for ($*IN.lines) -> $line was doing some lazy reading, no?)


It should, but it's not yet implemented.

Cheers,
Moritz


Go North! Go Wild! Fram!
http://www.framexpeditions.com





[perl #68762] very slow $*.IN reading

2009-08-24 Thread via RT
# New Ticket Created by  arcal0d 
# Please include the string:  [perl #68762]
# in the subject line of all future correspondence about this issue. 
# URL: http://rt.perl.org/rt3/Ticket/Display.html?id=68762 


Hello

Totally enthusiastic after Damian's talk on Perl 6 in Switzerland last  
week, I tried this morning to make a little script.
However, the script was taking ages just to count lines.

last rakudo download, on mac OS X

here are 2 pieces of code, one in Perl 6 and one in Perl 5
rakudo
#!/usr/local/parrot_install/bin/perl6

my $n=0;
for $*IN.lines -> $line {
   $n++;
}
say "n=$n";

and the old variant
#!/usr/bin/env perl
use strict;
use warnings;

my $n=0;
while (my $line=<STDIN>){
   $n++;
}
print "n=$n\n";


so I made 4 files, with 1, 10, 100 ... 1  lines (text, approx 100  
char per line)
code timings (with the command) for i in 1 10 100 1000 1; do echo  
number of lines=$i; time ./easy-aligned-p5.pl /tmp/a-$i.txt; done

user time is reported, in seconds

nblines perl 5  perl 6  `wc -l`
1   .011.96 .001
10  .0051.0 .001
100 .0051.3 .000
1000.0054.4 .001
1   .01536  .002

Well, is there a problem here, or just something I have not understood  
(I hoped for ($*IN.lines) -> $line was doing some lazy reading, no?)

Thanks for all the effort with Rakudo, which looks totally great anyway!
Alex



Go North! Go Wild! Fram!
http://www.framexpeditions.com





A remark to P5P thread Why is Ruby on Rails so darn slow?

2008-04-27 Thread Leopold Toetsch
Hi,

there was some benchmarking and discussion about perl5 slowness in ...
http://groups.google.at/group/perl.perl5.porters/browse_thread/thread/4ad7f6edacc97e1b/75af8abf89420f8c?hl=de;
... and of course there were some remarks re parrot.

I'm not subscribed to p5p, so feel free to forward the message.

Here are the numbers[1] for that mentioned benchmark[2]:

$ time perl mandel.pl > /dev/null; time ./parrot -j mandel.pir > /dev/null;
time ./mandel > /dev/null

real0m3.314s
user0m3.300s
sys 0m0.004s

real0m0.084s
user0m0.084s
sys 0m0.000s

real0m0.029s
user0m0.024s
sys 0m0.000s

leo

[1] on AMD [EMAIL PROTECTED], optimized parrot build of r27216 (-m32 --optimize)
[2] perl and C code from 
http://www.timestretch.com/FractalBenchmark.html#4d9e0f15fc1b42420cf3f778c3cccad7
perl 5.8.8, C 4.1.0 (C compiled w/ -O3), optimized parrot build of r27216 
(-m32 --optimize)

parrot code is here:

$ cat mandel.pir
.const int BAILOUT=   16
.const int MAX_ITERATIONS = 1000
.sub mandelbrot
.param num x
.param num y
.local num cr, ci, zr, zi
.local int i
cr = y - 0.5
ci = x
zi = 0.0
zr = 0.0
i  = 0
loop:
inc i
.local num temp, zr2, zi2
temp = zr * zi
zr2  = zr * zr
zi2  = zi * zi
zr   = zr2 - zi2
zr  += cr
zi   = temp + temp
zi  += ci
$N0  = zi2 + zr2
if $N0 > BAILOUT goto ret
if i > MAX_ITERATIONS goto ret0
goto loop
ret:
.return (i)
ret0:
.return (0)
.end
.sub main :main
.local num start, end
start = time
.local int x, y
y = -39
loopy:
if y >= 39 goto endy
print "\n"
x = -39
loopx:
if x >= 39 goto endx
.local int i
$N0 = x / 40
$N1 = y / 40
i = mandelbrot($N0, $N1)
if i == 0 goto pstar
print ' '
goto pdone
pstar:
print '*'
pdone:
inc x
goto loopx
endx:
inc y
goto loopy
endy:
end = time
$N2 = end - start
$P0 = new 'ResizablePMCArray'
push $P0, $N2
$S0 = sprintf "\nParrot Elapsed %0.2f\n", $P0
print $S0
.end


parrotbug - slow or size limits?

2005-09-06 Thread Joshua Hoblitt
I submitted a patch to parrotbug about 4 hours ago and it hasn't shown
up in RT yet nor have I received the auto-response email.  The patch was
rather large so I'm wondering if I haven't hit some sort of message size
limit.  Is this a normal lag or should I submit the patch directly
through RT?

Sep  5 12:27:48 [postfix/smtp] DD0533FBB1: to=[EMAIL PROTECTED], 
relay=mx.develooper.com[63.251.223.176], delay=46, status=sent (250 Queued! 
1125959107 qp 3628 [EMAIL PROTECTED])

-J



RE: what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wild?)

2005-07-05 Thread Paul Marquess
From: Yitzchak Scott-Thoennes [mailto:[EMAIL PROTECTED]

 On Mon, Jul 04, 2005 at 02:19:16PM +0100, Paul Marquess wrote:
  Whilst I'm here, when I do get around to posting a beta on CPAN, I'd
 prefer
  it doesn't get used in anger until it has bedded-in. If I give the
 module a
  version number like 2.000_00, will the CPAN shell ignore it?
 
 This is often done incorrectly. See L<perlmodstyle/Version numbering>
 for the correct WTDI:
 
    $VERSION = "2.000_00";    # let EU::MM and co. see the _
$XS_VERSION = $VERSION;   # XS_VERSION has to be an actual string
$VERSION = eval $VERSION; # but VERSION has to be a number
 
 Just doing $VERSION = 2.000_00 doesn't get the _ into the actual
 distribution version, and just doing $VERSION = "2.000_00" makes
 
    use Compress::Zlib 1.0;
 
 give a warning (because it does: 1.0 <= "2.000_00" internally, and _
 doesn't work in numified strings).
 
 But if you are doing a beta leading up to a 2.000 release, it should be
 numbered < 2.000, e.g. 1.990_01.  Nothing wrong with a 2.000_01 beta
 in preparation for a release 2.010 or whatever, though.

Thanks for the comprehensive answer folks. Much appreciated. 

Paul








what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wi ld?)

2005-07-04 Thread Konovalov, Vadim
 I've just been through the should-I-shouldn't-I-support-5.4 with my
 (painfully slow) rewrite of Compress::Zlib. In the end I 

...

I always thought that Compress::Zlib is just a wrapper around zlib which in
turn is C and developed elsewhere (and in stable state for a long time now).

What is (painfully slow) rewrite?


Re: what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wi ld?)

2005-07-04 Thread David Landgren

Konovalov, Vadim wrote:

I've just been through the should-I-shouldn't-I-support-5.4 with my
(painfully slow) rewrite of Compress::Zlib. In the end I 



...

I always thought that Compress::Zlib is just a wrapper around zlib which in
turn is C and developed elsewhere (and in stable state for a long time now).

What is (painfully slow) rewrite?


I think Paul means that it is taking him a long time to write the code, 
not that the code itself is slow.


David





RE: what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wild?)

2005-07-04 Thread Paul Marquess
From: Konovalov, Vadim [mailto:[EMAIL PROTECTED]
 
  I've just been through the should-I-shouldn't-I-support-5.4 with my
  (painfully slow) rewrite of Compress::Zlib. In the end I
 
 ...
 
 I always thought that Compress::Zlib is just a wrapper around zlib which
 in
 turn is C and developed elsewhere (and in stable state for a long time
 now).

Yes, that is mostly true, but there have been a few changes made to zlib of
late that I want to make available in my module (the ability to append to
existing gzip/deflate streams being one). Plus I had a list of new
features/enhancements I wanted to add that have been sitting on a TODO list
for ages.

The top issue in my mailbox for Compress::Zlib is the portability of the
zlib gzopen/read/write interface. I've now completely removed all
dependencies on the zlib gzopen code and written the equivalent of that
interface in Perl. A side-effect of that decision is that I now have
complete read/write access to the gzip header fields.
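
For anyone not familiar with it, the gzopen interface being replaced is the
thin wrapper over zlib's gzio layer. A minimal usage sketch (the filename is
made up) looks like this:

    use Compress::Zlib;

    # Classic gzopen/gzreadline/gzclose interface; this is the part that
    # is being reimplemented in Perl on top of the inflate/deflate code.
    my $gz = gzopen("data.gz", "rb")
        or die "cannot open data.gz: $gzerrno\n";
    my $line;
    while ($gz->gzreadline($line) > 0) {
        print $line;
    }
    $gz->gzclose();

The rewrite keeps these calls but implements them in Perl, which is what
removes the portability problems with zlib's own gzio code.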

Another reason is to provide better support for HTTP content encoding. I can
now autodetect and uncompress any of the three zlib-related compression
formats used in HTTP content-encoding, i.e. RFC 1950/1/2

etc, etc...

 What is (painfully slow) rewrite?

I don't have as much time to dabble these days, so I've been working at it
on and (mostly) off for at least a year. 

Whilst I'm here, when I do get around to posting a beta on CPAN, I'd prefer
it doesn't get used in anger until it has bedded-in. If I give the module a
version number like 2.000_00, will the CPAN shell ignore it?

Paul






RE: what slow could be in Compress::Zlib? (was RE: 5.004_xx in th e wi ld?)

2005-07-04 Thread Konovalov, Vadim
   What is (painfully slow) rewrite?
  
  I think Paul means that it is taking him a long time to 
 write the code,
  not that the code itself is slow.
 
 Correct. Looks like I answered the wrong question :-)

Indeed I understood it incorrectly the first time, but you shed quite a lot
of light on other important aspects of Compress::Zlib (which is now in the
Perl core, as everyone already knows :)

Thanks!


RE: what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wi ld?)

2005-07-04 Thread Paul Marquess
From: David Landgren [mailto:[EMAIL PROTECTED]


 Konovalov, Vadim wrote:
 I've just been through the should-I-shouldn't-I-support-5.4 with my
 (painfully slow) rewrite of Compress::Zlib. In the end I
 
 
  ...
 
  I always thought that Compress::Zlib is just a wrapper around zlib which
 in
  turn is C and developed elsewhere (and in stable state for a long time
 now).
 
  What is (painfully slow) rewrite?
 
 I think Paul means that it is taking him a long time to write the code,
 not that the code itself is slow.

Correct. Looks like I answered the wrong question :-)

Paul








Re: what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wild?)

2005-07-04 Thread Sébastien Aperghis-Tramoni

Paul Marquess wrote:

Whilst I'm here, when I do get around to posting a beta on CPAN, I'd 
prefer
it doesn't get used in anger until it has bedded-in. If I give the 
module a

version number like 2.000_00, will the CPAN shell ignore it?


Indeed, if a distribution is numbered with such a number, it is not 
indexed by PAUSE, and therefore can't be installed from CPAN/CPANPLUS



Sébastien Aperghis-Tramoni
 -- - --- -- - -- - --- -- - --- -- - --[ http://maddingue.org ]
Close the world, txEn eht nepO



Re: what slow could be in Compress::Zlib?

2005-07-04 Thread Andreas J. Koenig
 On Mon, 4 Jul 2005 14:19:16 +0100, Paul Marquess [EMAIL PROTECTED] 
 said:

   If I give the module a version number like 2.000_00, will the CPAN
   shell ignore it?

Yes. To be precise, the indexer on PAUSE will ignore it. But don't
forget to write it with quotes around it.

-- 
andreas


Re: what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wild?)

2005-07-04 Thread Yitzchak Scott-Thoennes
On Mon, Jul 04, 2005 at 02:19:16PM +0100, Paul Marquess wrote:
 Whilst I'm here, when I do get around to posting a beta on CPAN, I'd prefer
 it doesn't get used in anger until it has bedded-in. If I give the module a
 version number like 2.000_00, will the CPAN shell ignore it?

This is often done incorrectly. See L<perlmodstyle/"Version numbering">
for the correct WTDI:

   $VERSION = "2.000_00";# let EU::MM and co. see the _
   $XS_VERSION = $VERSION;   # XS_VERSION has to be an actual string
   $VERSION = eval $VERSION; # but VERSION has to be a number
   
Just doing $VERSION = 2.000_00 doesn't get the _ into the actual
distribution version, and just doing $VERSION = "2.000_00" makes

   use Compress::Zlib 1.0;

give a warning (because it does: 1.0 <= "2.000_00" internally, and _
doesn't work in numified strings).

But if you are doing a beta leading up to a 2.000 release, it should be
numbered < 2.000, e.g. 1.990_01.  Nothing wrong with a 2.000_01 beta
in preparation for a release 2.010 or whatever, though.
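
Put together in a module (the package name below is made up, purely for
illustration), the perlmodstyle idiom looks like this:

    package My::Beta;             # hypothetical name, just for the example
    our ($VERSION, $XS_VERSION);
    $VERSION    = "2.000_00";     # string form, so EU::MM and PAUSE see the _
    $XS_VERSION = $VERSION;       # XS code compares against the literal string
    $VERSION    = eval $VERSION;  # numifies to 2.00000 for "use My::Beta 1.0"
    1;

With the final eval in place, the use-time check compares against the plain
number, so the "isn't numeric" warning goes away.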


Re: Why is the fib benchmark still slow - part 1

2004-11-06 Thread Leopold Toetsch
Dan Sugalski [EMAIL PROTECTED] wrote:

 The current calling scheme stands.

[ ... ]

 No more performance changes. Period. We get parrot fully functional first.

OK. As far as PIR code is concerned, call internals are nicely hidden by
the function call and return syntax. But PASM could be a problem if
there is a growing amount of code around.
We will have to change the calling conventions slightly for more speed
eventually.

leo


Re: Why is the fib benchmark still slow - part 1

2004-11-06 Thread Dan Sugalski
At 9:38 AM +0100 11/6/04, Leopold Toetsch wrote:
Dan Sugalski [EMAIL PROTECTED] wrote:
 The current calling scheme stands.
[ ... ]
 No more performance changes. Period. We get parrot fully functional first.
OK. As far as PIR code is concerned, call internals are nicely hidden by
the function call and return syntax. But PASM could be a problem if
there is a growing amount of code around.
We will have to change the calling conventions slightly for more speed
eventually.
We can lift the requirement that registers be saved for calls. Beyond 
that, the calling conventions are fixed.
--
Dan

--it's like this---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk


Why is the fib benchmark still slow - part 1

2004-11-05 Thread Leopold Toetsch
Below (inline/attached) is a longish analysis of fib benchmark timings.
leo
Why is the fib benchmark still slow - part 1

Python runs the fib test roughly twice as fast as Parrot. I don't
like that ;) So what's the problem?

1) CPU cache issues

First, if you'd like to investigate the issue yourself (and run
i386/linux or similar, and don't have it yet), get valgrind. Great
tool. It contains a plugin called cachegrind which shows detailed
information about executed instructions, including cache misses.

For parrot -j -Oc examples/benchmarks/fib.imc[1] (and perl, python ...)
we get:

                 IRefs   DRefs   Time       L2 Misses
Perl 5.8.0       2.0e9   1.2e9   2.5s           7.600
Perl 5.8.0-thr   2.2e9   1.4e9   3.0s           8.500
Python 2.3.3     1.2e9   0.7e9   1.4s          54.000
Parrot IMS       1.1e9   0.7e9   3.3s       7.000.000
Parrot  MS       1.7e9   0.7e9   2.6s       4.800.000
Lua 5.0 [2]      0.7e9   0.4e9   0.85s !!!      4.000

IRefs ... instructions executed
DRefs ... data read + write
IMS   ... Parrot with incremental MS (settings.h), -O3 compiled
MS    ... Parrot CVS with stop-the-world MS, -O3

DRefs are boring - except that Perl5 takes 50% more. IRefs are quite
interesting: IMS has the fewest, even fewer than Python. But the benchmark
timings differ totally: IMS is slower than MS, and both are much
slower than Python.

The reason is in the last column. The valgrind manual states (and the
numbers above second it) that:
*one Level 2 cache miss takes roughly the time of 200 instructions*.

These 7 Mio cache misses account for ~1.4e9 IRefs (7.000.000 * 200), which
more than doubles the execution time. Timings might be better on a fat
Xeon CPU, but that doesn't really matter (above is w. AMD 800, 256 KB L2
cache).

The cache misses are basically in two places

a) accessing a new register frame and context
b) during DOD/GC

We have to address both areas to get rid of the majority of cache
misses.

ad a)

For each level of recursion, we are allocating a new context structure
and a new register frame. Half of these come from the recently
implemented return continuation and register frame caches. The other
half has to be freshly allocated. We get L2 cache misses for both the
context and the register structure on exactly every second function
call.

We can't do much about the cache misses in the context, except make it
as small as possible: e.g. by moving the now unused old register stack
pointers (4 words) out of the context, or by tossing these 4 pointers
entirely.

But we can address the problem with the register frame by, well, making
it smaller. Python runs the code on the stack; it mostly touches only
SP(0) and SP(1).

Register usage analysis of fib shows that it could run on Parrot with
just *one* persistent register per kind.

More about that is coming and was already said: sliding register
window.

ad b)

The (currently broken) Parrot setting ARENA_DOD_FLAGS shows one
possibility to reduce cache misses in DOD. During a sweep (which runs
through all allocated object memory) the memory itself isn't touched,
just a nibble per object is used, which holds the relevant information
like is_live.

A second way to address DOD issues it to make the PMCs variable sized.
I've proposed that already not too long ago.

Third, and independent of these, is not to touch most of the memory at
all by implementing a generational garbage collector. After an object
has survived a few GC cycles, it's moved into an older generation,
which isn't subject to a GC sweep.

Both the "make PMCs variable sized" idea and the generational GC very
likely need another indirection to access a PMC. But by avoiding just one
cache miss, you can do 200 such indirect accesses.

Thanks for reading til here & comments welcome,
leo

[1] fib.imc is using integers, which is already an optimization. But
that's not the point here.
[2] valgrind --skin=cachegrind /opt/src/lua-5.0/bin/lua
 /opt/src/lua-5.0/test/fib.lua  28
and that's even comparing a memoized fib with a plain one, too.
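
For reference, the fib being measured is the naive doubly-recursive one.
A Perl sketch of the same algorithm (not fib.imc itself; 28 is just the
argument from the Lua run in [2]):

    use strict;
    use warnings;

    # Naive doubly-recursive Fibonacci; every call allocates a new frame,
    # which is exactly what the analysis above is about.
    sub fib {
        my $n = shift;
        return $n if $n < 2;
        return fib($n - 1) + fib($n - 2);
    }

    print fib(28), "\n";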


Re: Why is the fib benchmark still slow - part 1

2004-11-05 Thread Leopold Toetsch
Miroslav Silovic wrote:
[EMAIL PROTECTED] wrote:
a) accessing a new register frame and context
b) during DOD/GC
Or it would make sense to use multi-frame register chunks. I kept 
locality of access in mind but somehow never spelled it out. But I 
*think* I mentioned 64kb as a good chunk size precisely because it fits 
well into the CPU cache - without ever specifying this as the reason.
Yep, that's the idea, as originally proposed. The one frame per chunk was 
the intermediate step. And I'm thinking of 64 Kb too. The Parrot 
register structure is 640 bytes on a 32-bit system w. 8-byte doubles. With 
that size we have a worst-case 1% overhead for allocating a new chunk.
Or: 100 levels of nesting happen without any allocation.

Anyway, if you can pop both register frames -and- context structures, 
you won't run GC too often, and everything will nicely fit into the 
cache.  
I thought about that too, but I'd rather have registers adjacent, so 
that the caller can place function arguments directly into the callee's 
incoming arguments.

OTOH it doesn't really matter if the context structure is in the frame 
too. We'd just need to skip that gap. REG_INT(64) or I64 is as valid as 
I0 or I4, as long as it's assured that it's exactly addressing the 
incoming argument area of the called function.

... Is the context structure a PMC now (and does it have to be, if 
the code doesn't specifically request access to it?)
The context structure is inside the interpreter structure. When doing a 
function call, it's a malloced copy of the caller's state, hanging off 
the continuation PMC.

ad b)

Is there a way to find out how many misses came out from DoD, compared 
to register frames allocation?
Sure. Cachegrind is showing you the line in C source code ;) With 
incremental MS we have (top n, rounded):

L2 write misses (all in context and registers)
500.000 Parrot_Sub_invoke  touch interpreter->ctx.bp
500.000   -- touch registers e.g. REG_PMC(0) = foo
200.000 copy_registers   --
700.000  ???:???   very likely JIT code writing regs first
600.000 malloc.c:chunk_alloc
L2 read misses (DOD)
1.300.000 Parrot_dod_sweep
1.300.000 contained_in_pool
  600.000 get_free_object free_list handling
plus 500.000 more in __libc_free.
I believe that you shouldn't litter (i.e. create an immediately GCable 
object) on each function call - at least not without a generational 
collector specifically optimised to work with this.
The problem isn't the object creation per se, but the sweep through the 
*whole object memory* to detect dead objects. It's of course true, that 
we don't need the return continuation PMC for the fib benchmark. But a 
HLL translated fib would use Integer PMCs for calculations.

...  This would entail 
the first generation that fits into the CPU cache and copying out live 
objects from it. And this means copying GC for Parrot, something that 
(IMHO) would be highly nontrivial to retrofit.
A copying GC isn't really hard to implement. And it has the additional 
benefit of providing better cache locality. Nontrivial to retrofit or 
not, we need a generational GC.

   Miro
leo


Re: Why is the fib benchmark still slow - part 1

2004-11-05 Thread Miroslav Silovic
[EMAIL PROTECTED] wrote:
a) accessing a new register frame and context
b) during DOD/GC
We have to address both areas to get rid of the majority of cache
misses.
ad a)
For each level of recursion, we are allocating a new context structure
and a new register frame. Half of these come from the recently
implemented return continuation and register frame caches. The other
half has to be freshly allocated. We get L2 cache misses for both the
context and the register structure on exactly every second function
call.
 

Or it would make sense to use multi-frame register chunks. I kept 
locality of access in mind but somehow never spelled it out. But I 
*think* I mentioned 64kb as a good chunk size precisely because it fits 
well into the CPU cache - without ever specifying this as the reason.

Anyway, if you can pop both register frames -and- context structures, 
you won't run GC too often, and everything will nicely fit into the 
cache.  Is the context structure a PMC now (and does it have to be, if 
the code doesn't specifically request access to it?)

ad b)
The (currently broken) Parrot setting ARENA_DOD_FLAGS shows one
possibility to reduce cache misses in DOD. During a sweep (which runs
through all allocated object memory) the memory itself isn't touched,
just a nibble per object is used, which holds the relevant information
like is_live.
 

Is there a way to find out how many misses came out from DoD, compared 
to register frames allocation?

I believe that you shouldn't litter (i.e. create an immediately GCable 
object) on each function call - at least not without a generational 
collector specifically optimised to work with this. This would entail 
the first generation that fits into the CPU cache and copying out live 
objects from it. And this means copying GC for Parrot, something that 
(IMHO) would be highly nontrivial to retrofit.

   Miro

