You aren't applying the "within one hour" condition here, but other than that it should work well, even with the consumer identity included. -W
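One way the hour window might be folded back into the "group 1" rule below — an untested sketch, assuming `date` is an epoch timestamp in milliseconds (with `java.util.Date` fields you would compare via `getTime()` or a helper instead). Note it windows relative to the group's oldest transaction, which may or may not be the intended "within an hour of each other" semantics:

```
rule "group 1 within hour"
when
    $a : Trans( $groupValue : prodCat, $date : date )
    not Trans( this != $a, prodCat == $groupValue, date < $date )
    $b : LinkedList( size >= 1 ) from collect (
        Trans( this != $a, prodCat == $groupValue,
               date > $date, date <= ($date + (60 * 60 * 1000)) ) )
then
    // process $a together with the collected facts in $b
end
```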
2010/10/19 Tim 4076 <[email protected]>:
> What do you guys think of this: my facts are all the same type, so I didn't
> have anything like a consumer object to break things down with. But what I
> did was:
>
> rule "group 1"
> when
>     $a : Trans( $groupValue : prodCat, $date : date )
>     not Trans( this != $a, prodCat == $groupValue, date < $date )
>     $b : LinkedList( size >= 1 ) from collect (
>         Trans( this != $a, prodCat == $groupValue, date > $date ) )
> then
>     // do something
> end
>
> This is quite quick. The rule grabs all the Trans objects that have the same
> prodCat in a single firing, without the need to iteratively retract facts
> or fire multiple times for the same group value. As the first condition only
> matches a single fact (the one with the oldest date), there is only ever a
> single permutation that can fulfill the conditions.
>
> This only aggregates on a single attribute, but the principle should
> work with more.
>
>
> On 18 October 2010 16:12, Wolfgang Laun <[email protected]> wrote:
>>
>> Doing s.th. like
>>     $t1 : Trans( $id : id, $pc : pc, $tt : tt )
>>     $t2 : Trans( this != $t1, id == $id, pc == $pc, tt < ($tt + 3600) )
>> is bound to produce poor performance.
>>
>> Divide and conquer!
>>
>> You might start with a Consumer record:
>>     Consumer( $id : id )
>>     $t1 : Trans( id == $id, $pc : pc, $tt : tt )
>>     $t2 : Trans( this != $t1, id == $id, pc == $pc, tt < ($tt + 3600) )
>>
>> You might run an (external) sort on the Transaction records and
>> process them in batches of identical id+pc.
>>
>> If transaction times don't go around the clock, you might sort by
>> date and process day by day.
>>
>> You may have to create a Domain Specific Language for the
>> non-programmers, putting a firm rein on how they combine the basic
>> facts. Processing large batches is bound to require skills they just
>> don't have.
>>
>> -W
>>
>>
>> On 18 October 2010 16:14, Greg Barton <[email protected]> wrote:
>> > It would be nice if we had an example of some rules. That way we can
>> > rule out obvious performance killers like cartesian products and multiple
>> > "from" clauses in one rule.
>> >
>> > GreG
>> >
>> > On Oct 18, 2010, at 5:19, Tim 4076 <[email protected]> wrote:
>> >
>> > Hi,
>> > I'm trying to use Drools to group data according to patterns defined
>> > in my rules, but I'm having trouble creating something that runs in a
>> > reasonable amount of time (seconds). I've tried all sorts of permutations
>> > without much luck and would like to hear how others would do the same
>> > thing.
>> >
>> > To give an example: I've got a big batch of transaction records and I
>> > want to aggregate all the records where the consumer id and product
>> > category are the same and the purchases were made within an hour of
>> > each other.
>> >
>> > The fact that it's matching the same values between facts, rather than
>> > against constants, seems to scupper it somewhat.
>> >
>> > I would go down the ETL route, but the idea is for non-techies to define
>> > their own aggregations using rules.
>> >
>> > -Cheers.
>> > Tim
>> >
>> > _______________________________________________
>> > rules-users mailing list
>> > [email protected]
>> > https://lists.jboss.org/mailman/listinfo/rules-users
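Tim notes above that the single-attribute approach should extend to more grouping attributes. A possible two-attribute variant of the "group 1" rule — an untested sketch, assuming the consumer id is exposed on Trans as an `id` field:

```
rule "group by consumer and category"
when
    $a : Trans( $id : id, $groupValue : prodCat, $date : date )
    not Trans( this != $a, id == $id, prodCat == $groupValue, date < $date )
    $b : LinkedList( size >= 1 ) from collect (
        Trans( this != $a, id == $id, prodCat == $groupValue, date > $date ) )
then
    // one firing per (id, prodCat) group, anchored on its oldest transaction
end
```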
