Sorry Venkat, this is pushing beyond my immediate knowledge. You'd just need to experiment.
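If you do experiment, one untested starting point might be a non-geodetic RPT field whose worldBounds is widened to cover epoch seconds (for milliseconds you would scale the bounds and maxDistErr accordingly). The field name and bounds below are placeholders, not a recommendation; the upper bound is roughly Jan 1, 2100 in epoch seconds, and the ENVELOPE syntax assumes Solr 5-style worldBounds:

  <!-- Hypothetical sketch: a flat (non-geo) spatial field for epoch-second
       range pairs; maxDistErr="1" aims at one-second precision. -->
  <fieldType name="epochRangePair"
             class="solr.SpatialRecursivePrefixTreeFieldType"
             geo="false"
             distErrPct="0"
             maxDistErr="1"
             worldBounds="ENVELOPE(0, 4102444800, 4102444800, 0)" />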
But the document still looks a bit wrong; specifically, I don't understand where those extra "366" values are coming from. Each entry should be just a two-dimensional coordinate: the first value for the start of the range, the second for the end. You seem to have two extra, useless ones.

Regards,
   Alex.
----
Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter: http://www.solr-start.com/


On 21 August 2015 at 21:29, vaedama <sudheer.u...@gmail.com> wrote:
> Alexandre,
>
> Fantastic answer! I think having a start position would work nicely with
> my use case :) Also, I would prefer to do the date math during indexing.
>
> *Question #1:* Can you please tell me if this doc looks correct (given
> that I am not yet bothered about factoring "year" into my use case)?
>
> Student "X" was `absent` between these dates:
>
>   Jan 1, 2015 and Jan 15, 2015
>   Feb 13, 2015 and Feb 16, 2015 (Feb 13 being the 44th day of 2015 and
>   Feb 16 the 47th)
>   March 19, 2015 and March 25, 2015
>
> Also, "X" was `present` between these dates:
>
>   Jan 25, 2015 and Jan 30, 2015
>   Feb 1, 2015 and Feb 12, 2015
>
> {
>   id: "X",
>   state: ["absent", "present"],
>   absentDays: [ [1, 15, 366, 366], [44, 47, 366, 366], [78, 84, 366, 366] ],
>   presentDays: [ [25, 30, 366, 366], [32, 43, 366, 366] ]
> }
>
> *Question #2:*
>
> Since I need timestamp-level granularity, what is the appropriate way to
> store the field?
>
> Student "X" was `absent` between epoch times:
>
>   1420104600 (9:30 AM, Jan 1, 2015) and 1421341200 (5:00 PM, Jan 15, 2015)
>
> Is it possible to change *worldBounds* to take a polygon structure where
> I can represent millisecond-level granularity?
>
> Thanks in advance,
> Venkat Sudheer Reddy Aedama
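P.S. To make the shape concrete, here is a minimal sketch of the doc as I would expect it, with plain [start, end] pairs and the extra values dropped (field names are taken from your example; this is untested):

  {
    id: "X",
    state: ["absent", "present"],
    absentDays: [ [1, 15], [44, 47], [78, 84] ],
    presentDays: [ [25, 30], [32, 43] ]
  }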