An incremental counter is fine; we don't need a millisecond timestamp - although that may change when we start to encounter some different use cases (clustering etc.) - but for now it's fine.

Our conflict strategy is very simple. Each salience value has its own LinkedList queue, so if I have 5 rules, each with a salience of 25, then there will be 5 fact[] entries on that salience's queue. On top of this we have a priority queue: when a salience queue has one or more values, it is placed onto the priority queue. This is how we achieve salience+LIFO while still allowing activations to remove themselves without scanning.

Lazy sorting is essential for speed, as most activations do not fire - hence the need for a priority queue - but it is also essential that we have the linked lists, to avoid scanning when removing activations. If we want more flexible conflict resolution strategies, including ones based on recency, then we will need to develop a specialised priority queue whose nodes can remove themselves without scanning - and one that preferably has minimal overhead for node removal. If we achieve this we can drop the salience LinkedList queues inside the priority queue and go straight back to a single priority queue for all activations - like we had in 2.x.
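The scheme above can be sketched roughly as follows. This is an illustrative sketch only, not the actual Drools classes: the names Agenda, SalienceBucket and Activation are hypothetical, and a real implementation would give each activation an intrusive doubly-linked node so removal within a bucket is O(1), rather than the LinkedList.remove scan used here for brevity.

```java
import java.util.LinkedList;
import java.util.PriorityQueue;
import java.util.TreeMap;

public class Agenda {

    static class Activation {
        final String rule;
        final int salience;
        Activation(String rule, int salience) {
            this.rule = rule;
            this.salience = salience;
        }
    }

    // One bucket per salience value; LIFO within a bucket gives salience+LIFO.
    static class SalienceBucket {
        final int salience;
        final LinkedList<Activation> queue = new LinkedList<>();
        SalienceBucket(int salience) { this.salience = salience; }
    }

    private final TreeMap<Integer, SalienceBucket> buckets = new TreeMap<>();
    // Only non-empty buckets live here; highest salience first.
    private final PriorityQueue<SalienceBucket> ready =
        new PriorityQueue<>((a, b) -> b.salience - a.salience);

    public void add(Activation a) {
        SalienceBucket bucket = buckets.get(a.salience);
        if (bucket == null) {
            bucket = new SalienceBucket(a.salience);
            buckets.put(a.salience, bucket);
        }
        if (bucket.queue.isEmpty()) {
            ready.add(bucket); // bucket just became non-empty: enter the priority queue
        }
        bucket.queue.addFirst(a); // LIFO: newest activation fires first
    }

    // An activation cancels itself by touching only its own salience bucket;
    // no scan across the whole agenda is needed.
    public void remove(Activation a) {
        SalienceBucket bucket = buckets.get(a.salience);
        if (bucket != null) {
            bucket.queue.remove(a);
            if (bucket.queue.isEmpty()) {
                ready.remove(bucket); // empty buckets leave the priority queue
            }
        }
    }

    public Activation next() {
        SalienceBucket bucket = ready.peek();
        if (bucket == null) return null;
        Activation a = bucket.queue.removeFirst();
        if (bucket.queue.isEmpty()) ready.remove(bucket);
        return a;
    }
}
```

The point of the two-level structure is that the priority queue only ever holds one entry per distinct salience value in use, so its size stays tiny even with thousands of pending activations.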

Mark
PS: Please make sure your replies include the dev list.
Edson Tirelli wrote:
   They are not timestamps, Mic, only an incremental counter.
    ----- Original Message -----
    *From:* Michael Neale <mailto:[EMAIL PROTECTED]>
    *To:* Edson Tirelli <mailto:[EMAIL PROTECTED]> ; Mark
    Proctor <mailto:[EMAIL PROTECTED]> ; Peter Lin
    <mailto:[EMAIL PROTECTED]>
    *Sent:* Saturday, April 08, 2006 12:05 AM
    *Subject:* RE: Waltz

    How accurate do the timestamps have to be?
    Is this something we can switch in and out? It sounds like
    normally it would be overkill for us.

    ------------------------------------------------------------------------
    *From:* Edson Tirelli [mailto:[EMAIL PROTECTED]
    *Sent:* Sat 8/04/2006 10:04 AM
    *To:* Michael Neale; Mark Proctor; Peter Lin
    *Subject:* Re: Waltz

       I already made the change to calculate the recency for the
    tuples, but the problem is really in the agenda when it needs to
    choose the rule to fire based on this recency. Mark thinks using an
    ordered set/map or whatever is costly, so we need to create a
    solution that uses references and is not so costly.
[]s
       Edson
        ----- Original Message -----
        *From:* Michael Neale <mailto:[EMAIL PROTECTED]>
        *To:* Edson Tirelli <mailto:[EMAIL PROTECTED]> ;
        Mark Proctor <mailto:[EMAIL PROTECTED]> ; Peter Lin
        <mailto:[EMAIL PROTECTED]>
        *Sent:* Friday, April 07, 2006 8:08 PM
        *Subject:* RE: Waltz

        Is the second modification, to recency, a big modification for
        us? I don't really know how our LIFO works; I can only assume
        it is at the fact level, so is it hard to sum it up for tuple
        recency?
Michael.

        ------------------------------------------------------------------------
        *From:* Edson Tirelli [mailto:[EMAIL PROTECTED]
        *Sent:* Sat 8/04/2006 7:30 AM
        *To:* Mark Proctor; Michael Neale; Peter Lin
        *Subject:* Waltz


            All,

            Today I finished both the port of Waltz to Jess and the
        corrections to our DRL version of the rules. I believe our DRL
        is now correct; I was not able to find any problem.
            However, our results differ from Jess's. In the *simplest*
        case (18 lines), I think both our answer and Jess's answer are
        correct, even though they differ. But when I tried larger
        data bases, our answer does not seem to be correct, as it
        plots almost all edges as "Boundary" edges.

            The reason for this is that we trigger rules in a
        different sequence from Jess. From what I could gather, looking
        at some classes inside Jess, the criteria it uses for
        triggering rules are:

        1. Salience
        2. Recency, where the tuple recency it uses is the sum of the
        recencies of all facts inside the tuple
        3. I was not able to determine this one; it seems to be a
        non-deterministic criterion.
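
Criterion 2 is simple to state; a minimal sketch follows, assuming each fact carries an incremental recency counter as discussed earlier in the thread. Fact, Tuple and the comparator name are illustrative, not Jess's or Drools' actual classes.

```java
import java.util.Comparator;
import java.util.List;

public class Recency {

    // Each fact carries the incremental counter assigned on assertion.
    record Fact(long recency) {}

    // A tuple's recency is the sum of the recencies of its facts.
    record Tuple(List<Fact> facts, int salience) {
        long recency() {
            return facts.stream().mapToLong(Fact::recency).sum();
        }
    }

    // Order activations by salience, then by summed recency, both
    // descending: higher salience first, and among equals the tuple
    // built from the most recently asserted facts first.
    static final Comparator<Tuple> JESS_STYLE =
        Comparator.comparingInt(Tuple::salience)
                  .thenComparingLong(Tuple::recency)
                  .reversed();
}
```

Note that summing means one very recent fact can outweigh several older ones, which is part of why execution order is so data sensitive.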

            This makes it impossible for us to compare performance, as
        we are clearly following different execution paths from Jess.

            What are the next steps on this? Should we implement the
        second criterion above?
            Unless I'm missing something here, the test is extremely
        data sensitive and seems to be designed to work with the
        resolution criteria Jess/CLIPS use.

            Thoughts?
                Edson

          ---
          Edson Tirelli
          Auster Solutions do Brasil
          @ www.auster.com.br
          +55 11 5096-2277 / +55 11 9218-4151



        ------------------------------------------------------------------------
        No virus found in this incoming message.
        Checked by AVG Free Edition.
        Version: 7.1.385 / Virus Database: 268.4.0/304 - Release Date:
        7/4/2006

    ------------------------------------------------------------------------

