Hi, Alex.

>
>I haven't tried the code, or analyzed it in detail. But indeed I would
>also expect that the time should increase linearly.
>
Right.  I have played with the code a little (using 'bench'), and it
seems to point to IO being the issue.

Here is the timing info:

: (bench (main 10)) 
  0.009 sec
: (bench (main 100)) 
  0.100 sec
: (bench (main 1000)) 
  0.663 sec
: (bench (main 10000)) 
  6.540 sec

When I replace 

  (prinl (make-namestring-code))

with the same work, but throwing the result away instead of printing it
(just a progress dot per iteration)

  (prin '.) (setq tmp (make-namestring-code))

then it becomes almost instantaneous:

: (bench (altmain 10)) 
  0.000 sec
: (bench (altmain 100)) 
  0.000 sec
: (bench (altmain 1000)) 
  0.000 sec
: (bench (altmain 10000)) 
  0.001 sec
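
To be explicit about where those two expressions live: 'altmain' is just
a copy of 'main' with the loop body swapped.  Roughly like this (a
minimal sketch, not the exact code; 'make-namestring-code' is the
name-building function, sketched further below):

   (de main (N)
      (do N
         (prinl (make-namestring-code)) ) )       # prints every generated name

   (de altmain (N)
      (do N
         (prin ".")                               # progress dot instead of the name
         (setq Tmp (make-namestring-code)) ) )    # same work, result just stored away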

I am less surprised by this than by how Matz and the Ruby guys got their
code to run in constant time.  As I mentioned earlier, this is purely an
observation, not a criticism.  I am enjoying PicoLisp very much.

>
>The amount of data (= length of the lists) is the same in each test,
>right? Because, if the lists were longer, accessing them with 'get'
>would cause an exponential increase.
>
Yup, they are constant: one slurp of each file into a list, then random
access of the lists n times.
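
To make that concrete, here is roughly the shape of the setup (a sketch
only: the file names are placeholders and the real code differs in
detail, but the slurp-then-random-'get' pattern is the one described
above):

   # Slurp each file into a list of lines (placeholder file names)
   (setq *FirstNames
      (in "first.txt"
         (make (until (eof) (link (line T)))) ) )
   (setq *LastNames
      (in "last.txt"
         (make (until (eof) (link (line T)))) ) )

   (de make-namestring-code ()
      # A random 'get' walks the list to the Nth cell, so each access
      # costs time proportional to the index; the lists never change
      # size here, so that cost stays the same across runs
      (pack
         (get *FirstNames (rand 1 (length *FirstNames)))
         " "
         (get *LastNames (rand 1 (length *LastNames))) ) )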

>You might consider some measurements to find out what causes this
>behavior. Especially, take a look at the profiling library in
>"@lib/prof.l". Or simply use 'bench'.
>
I had a quick look at prof.l, but it is not clear to me how to use
(profile).  Can you give a quick example?

Many thanks.

 .. mark.
