On Jul 10, 2011, at 6:10 PM, Lee, David wrote:

> I'm experimenting with Range Indexes over dateTime and dayTimeDuration values.
> I have a large number (millions) of small documents/fragments each with a 
> dateTime and/or dayTimeDuration.
>  
> Currently these are at millisecond accuracy.  For most things I don't need the
> ms accuracy, but it's useful on occasion.  I am wondering: is there a
> detrimental effect to this precision?

I've never empirically tested these things, but since you didn't get another 
answer, I'll jump in.

If you're storing as dateTime or dayTimeDuration I doubt you'll see a 
worthwhile advantage in speed or memory by rounding to seconds.
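
For what it's worth, keeping the millisecond precision doesn't stop you from
querying at whole-second granularity.  A sketch, assuming a dateTime range
index on a hypothetical <ts> element (adjust the QName to your schema):

  let $start := xs:dateTime("2011-07-10T18:10:05-07:00")
  return cts:search(fn:collection(),
    cts:and-query((
      (: "ts" is a placeholder element name, not from your setup :)
      cts:element-range-query(xs:QName("ts"), ">=", $start),
      cts:element-range-query(xs:QName("ts"), "<",
        $start + xs:dayTimeDuration("PT1S"))
    )))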

> However, if I truncate the dateTime to seconds, there will be vastly fewer
> unique values.
>  
> I am curious what the effect, if any, would be of doing this.  Does the size
> or search time of the range indexes depend on the number of unique values,
> or more on the number of fragments?

Number of fragments * number of values per fragment.
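
To make that concrete, a back-of-the-envelope sketch (the 16 bytes per entry
is my assumption for illustration, not a documented figure):

  let $fragments := 5000000        (: hypothetical corpus size :)
  let $values-per-fragment := 1
  let $bytes-per-entry := 16       (: assumed: fragment id + dateTime value :)
  return ($fragments * $values-per-fragment * $bytes-per-entry)
         div (1024 * 1024)
  (: roughly 76 MB, the same whether the values are unique or not :)

Collapsing millisecond values to seconds changes how many distinct values
there are, but not how many entries the index holds.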

> I am thinking it would have to depend on both, as it needs to map value ->
> (set of fragments).
> So what is the difference if the common case is nearly 1:1 value:fragment vs.
> 1:many value:fragment?

Think of it as an array of structs, each entry holding a fragment id and a
value.  Each entry has a fixed size in memory, so the total size tracks the
number of entries rather than the number of distinct values.
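
A toy illustration of that in plain XQuery (conceptual only -- not how the
index is actually built): one (value, fragment id) pair per occurrence, so
the entry count doesn't shrink when values repeat.

  let $values := (xs:dateTime("2011-07-10T18:10:05.123Z"),
                  xs:dateTime("2011-07-10T18:10:05.456Z"),
                  xs:dateTime("2011-07-10T18:10:05.456Z"))
  let $entries := for $v at $id in $values
                  return fn:concat($v, "|", $id)   (: one entry per occurrence :)
  return (fn:count($entries), fn:count(fn:distinct-values($values)))
  (: => 3 entries, 2 distinct values :)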

-jh-

