Henrik, thanks for sharing. Interesting question!

> I can fairly trivially do a collect and then a filter on the results of the
> collect as shown above.
> But how would the above problem be solved with Pilog and select if we have
> more than "a couple of hundred objects" in the database?

For my own curiosity: would Pilog actually perform much better than a
collect/filter here? I would think that collect/filter should be nearly
instant on up to a few thousand items (untested). When the data set
starts to get massive, indexes could theoretically help. Working
through the logic, though, it seems the various indexes would have to
be hit a few times at least.
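
For comparison, the plain collect-and-filter version might look
roughly like this (a sketch only; I'm assuming the '+Proj' class with
'sDate'/'eDate' string properties and the Sdate/Edate range variables
from your mail):

   # Sketch: fetch all projects via the 'sDate index, then keep those
   # overlapping the range [Sdate .. Edate]. A project overlaps iff it
   # starts no later than Edate AND ends no earlier than Sdate.
   (filter
      '((P)
         (and
            (>= Edate (; P sDate))      # started on or before end of range
            (>= (; P eDate) Sdate) ) )  # ended on or after start of range
      (collect 'sDate '+Proj) )

With "YYYY-MM-DD" strings the plain >= comparison should order dates
correctly.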


   (Started (collect 'sDate '+Proj Sdate T)                    # projects started after the S date
    Ended (collect 'eDate '+Proj "1900-01-01" Edate)           # projects ended before the end date
    StartedBefore (collect 'sDate '+Proj "1900-01-01" Sdate)   # projects with start date before the range
    EndedAfter (collect 'eDate '+Proj Edate T)                 # projects with end date after the range

With this approach, the various sets would be combined and duplicates
removed. Maybe something like this:

(uniq (append Started Ended (sect StartedBefore EndedAfter)))

I might have the logic wrong, but I do wonder whether that's actually
faster than just walking all the entries and filtering, for a typical
use case.
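
Tangentially, if I have collect's range semantics right, the overlap
condition (starts on or before Edate, ends on or after Sdate) might
collapse the four sets into just two index scans and one intersection.
A hypothetical sketch, with T assumed as the "greater than everything"
upper bound:

   (sect
      (collect 'sDate '+Proj "1900-01-01" Edate)  # started on or before end of range
      (collect 'eDate '+Proj Sdate T) )           # ended on or after start of range

Completely untested, of course.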