By way of follow-up, I coded a version of this benchmark that uses
functions only, with the data stored in a block initialized to

    [ _keys [] _vals [] ]

That block is passed as an argument to the functions that access or
modify keys and values; the functions reach the two sections of the
block via paths.  (Source code is attached below, if anyone cares to
critique it.)
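
For concreteness, here is a short usage sketch against the attached
functions (the key and value are arbitrary); the caller just holds
the storage block and hands it to each call, and the functions reach
the two sections through ads/_keys and ads/_vals:

    ads: adsf2.new                   ; => [_keys [] _vals []]
    adsf2.put ads "color" "red"
    probe ads/_keys                  ; == ["color"]
    probe ads/_vals                  ; == ["red"]
    print adsf2.get ads "color"      ; == red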

The ratios for a small collection of tests are:

   Key   Nbr of     Test  In-line   Object  Functns  Functns
 Space  Testing  Harness  Block     Method   w/ Obj  w/ Path
  Size   Cycles     Loop  Version  Version  Storage  Storage
------  -------  -------  -------  -------  -------  -------
10,000    5,000    0.00%  100.00%  177.50%  175.00%  177.50%
 1,000   50,000    1.92%  100.00%  183.65%  190.38%  191.35%
10,000   50,000    0.22%  100.00%  187.96%  192.34%  192.56%
   100  500,000   10.31%  100.00%  214.95%  234.54%  243.81%
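
The harness itself is not reproduced in this message; the following
is only an illustrative sketch of such a relative-timing loop (the
cycle count and the operation timed are arbitrary).  Each percentage
above is one version's elapsed time expressed as a percentage of the
in-line block version's time.

    time-it: func [n [integer!] body [block!] /local t0] [
        t0: now/time/precise
        loop n body
        now/time/precise - t0
    ]

    ads: adsf2.new
    adsf2.put ads 42 "answer"

    print ["empty loop:" time-it 50000 []]
    print ["adsf2.get: " time-it 50000 [adsf2.get ads 42]]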

Another surprise!  It now appears to me that the cost of function
entry and exit (whether for object methods or "bare" functions) is
the driving factor in the timings.

My current conclusion is that REBOL objects are essentially tied
with plain function calls in terms of the cost/benefit trade-off
between runtime and readability/economy of expression.  I think
that (unless one is using so many objects that the duplication
overhead becomes a factor) the benefits of namespace management
and the potential improvements in reusability give objects the
edge.
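
(The object-method version itself is not attached.  Purely as an
illustration of that style, and not the code actually timed above,
such an object would bundle the storage and the accessors together,
along these lines; this is also the style whose per-instance
duplication cost is mentioned above.)

    adso: make object! [
        _keys: copy []
        _vals: copy []
        put: func [k [any-type!] v [any-type!] /local p] [
            either found? p: find/only _keys k [
                change/only at _vals index? p v
            ][
                append/only _keys k
                append/only _vals v
            ]
            v
        ]
        get: func [k [any-type!] /local p] [
            either none? p: find/only _keys k [none] [
                pick _vals index? p
            ]
        ]
    ]

    adso/put "color" "red"
    print adso/get "color"    ; == red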

-jn-

[EMAIL PROTECTED] wrote:
> 
[snip]
>
>       Relative
> Loop    time    Comments
> ====  ========  ====================================================
>   1      0.66%  We can "do nothing" really fast! ;-)  This means
>                 that the test harness didn't distort the comparisons.
> 
>   2    100.00%  The block version.
> 
>   3    174.17%  So the cost of key-value separation, keeping all the
>                 methods out of the global namespace, etc. is an
>                 overhead of about 75% -- not intolerable for most
>                 purposes.
> 
>   4    178.15%  This result makes me suspect that accessing the
>                 components of an object is the operation that's
>                 driving the runtime cost.
> ====  ========  ====================================================
> 
> More testing (a functional version that handles key-value separation
> without using an object for data storage) is clearly in order, but I
> thought I'd pass on those preliminary results.
>
REBOL [
    Title: "Minimal Associative Data Store - Function Version 2"
    File:  %adsf2.r
    Date:  20-Sep-2000
    Author: "Joel Neely"
    Purpose: {
        Function-based implementation of associative data store,
        using object for data only, with arbitrary data types
        for keys and values.
    }
]

adsf2.block: func [] [
    ; Fresh storage: parallel blocks of keys and values, reached via paths
    copy/deep [
        _keys []
        _vals []
    ]
]

adsf2.clear: func [ads [block!]] [
    ads/_keys: copy []
    ads/_vals: copy []
    ads
]

adsf2.empty?: func [ads [block!]] [system/words/empty? ads/_keys]

adsf2.length?: func [ads [block!]] [system/words/length? ads/_keys]

adsf2.new: func [] [adsf2.block]

adsf2.put: func [ads [block!] k [any-type!] v [any-type!] /local p] [
    ; Replace the value in place if the key exists; otherwise append both
    either found? p: find/only ads/_keys k [
        change/only at ads/_vals index? p v
    ][
        append/only ads/_keys k
        append/only ads/_vals v
    ]
    v
]

adsf2.get: func [ads [block!] k [any-type!] /local p] [
    either none? p: find/only ads/_keys k [
        none
    ][
        pick ads/_vals index? p
    ]
]

adsf2.delete: func [ads [block!] k [any-type!] /local p v] [
    either none? p: find/only ads/_keys k [
        none
    ][
        ; Save the value, drop both entries, then return the value
        v: pick   ads/_vals p: index? p
        remove at ads/_keys p
        remove at ads/_vals p
        v
    ]
]

adsf2.keys: func [ads [block!]] [copy ads/_keys]
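
A quick exercise of the attached functions (not part of the benchmark;
keys and values here are arbitrary):

    ads: adsf2.new
    adsf2.put ads "one" 1
    adsf2.put ads "two" 2
    print adsf2.length? ads          ; == 2
    print adsf2.delete ads "one"     ; == 1 (the removed value)
    probe adsf2.keys ads             ; == ["two"]
    adsf2.clear ads
    print adsf2.empty? ads           ; == true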
