I still like this approach, but I've discovered one wrinkle: my
dataset contains docs dated exactly at the epoch (i.e. midnight Jan
1, 1970), as well as docs dated before the epoch (e.g. midnight Jan
1, 1950).

The docs dated *before* the epoch so far don't seem to be a problem;
they end up having a negative numeric value, but that seems workable.

The docs dated exactly *at* the epoch, though, are trouble, because I
can't tell them apart from the undated docs in my function query.
(Both end up with a numeric value of 0 in the function query code, so
a missing date is indistinguishable from midnight Jan 1, 1970.)

So far, in my case, the best bet seems to be changing the time
component of my dates. So rather than rounding dates to the nearest
midnight (e.g. 1970-01-01T00:00:00Z), I could round them to the
nearest, say, 1AM (e.g. 1970-01-01T01:00:00Z), with the goal of making
sure that none of my legitimate date field values will evaluate to
numeric value 0. Since I don't show the time component of dates to my
users, I don't think this would cause any real trouble. It feels
slightly unclean, though.
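For what it's worth, the rounding tweak would look something like
this in Python (`round_to_1am` is just a name I made up for the
sketch): round down to midnight as before, then shift by an hour so
no real date ever lands on exactly 0 milliseconds.

```python
from datetime import datetime, timedelta, timezone

def round_to_1am(dt):
    # Round down to midnight, then shift to 01:00 so that even a doc
    # dated Jan 1, 1970 gets a nonzero millisecond value.
    midnight = dt.replace(hour=0, minute=0, second=0, microsecond=0)
    return midnight + timedelta(hours=1)

d = round_to_1am(datetime(1970, 1, 1, 0, 30, tzinfo=timezone.utc))
print(int(d.timestamp() * 1000))  # 3600000, not 0
```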

On Thu, Apr 8, 2010 at 1:05 PM, Chris Harris <rygu...@gmail.com> wrote:
> If anyone is curious, I've created a patch that creates a variant of
> map that can be used in the way indicated below. See
> http://issues.apache.org/jira/browse/SOLR-1871
>
> On Wed, Apr 7, 2010 at 3:41 PM, Chris Harris <rygu...@gmail.com> wrote:
>
>> Option 1. Use map
>>
>> The most obvious way to do this would be to wrap the reference to
>> mydatefield inside a map, like this:
>>
>>    recip(ms(NOW,map(mydatefield,0,0,ms(NOW)),3.16e-11,1,1))
>>
>> However, this throws an exception because map is hard-coded to take
>> float constants, rather than arbitrary subqueries.
>
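For reference, here's a rough Python sketch of what that map-wrapped
function query is *meant* to compute (illustrative only; the
3.16e-11 scale constant is taken from the quoted query, and
recip(x,m,a,b) in Solr is a/(m*x+b)): a doc whose date value is 0 is
mapped to NOW, so its age becomes 0 and it receives the neutral boost
of 1.0 rather than being treated as extremely old.

```python
def boost(mydatefield_ms, now_ms):
    # Emulate recip(ms(NOW, map(mydatefield,0,0,ms(NOW))), 3.16e-11, 1, 1).
    # A zero (i.e. missing-or-epoch) date is mapped to NOW...
    mapped = now_ms if mydatefield_ms == 0 else mydatefield_ms
    # ...so its age in milliseconds is 0...
    age_ms = now_ms - mapped
    # ...and recip(age, 3.16e-11, 1, 1) = 1 / (3.16e-11 * age + 1).
    return 1.0 / (3.16e-11 * age_ms + 1.0)
```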
