: This works well when the number of fields is small, but what are the
: performance ramifications when the number of fields is more than 1000? 
: Is this a serious performance killer? If yes, what would we need to
: counteract it, more RAM or faster CPUs? Or both?

the performance characteristics of having 1000 "fields" should be the same 
regardless of whether those fields are explicitly named in your schema, or 
created on the fly because of dynamic field declarations ... it might be 
more expensive to query 1000 fields than it is to query 10 fields, but the 
dynamic nature of it isn't going to matter -- it's the number of clauses 
that makes the difference (1000 clauses on one field is going to have about 
the same characteristics)
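
for illustration, a dynamic field declaration in schema.xml looks something 
like this (the field names and types here are made up, and assume a "string" 
fieldType is already defined in the schema):

  <!-- any field whose name ends in "_s" is created on the fly at index
       time; at query time it behaves the same as an explicitly named field -->
  <dynamicField name="*_s" type="string" indexed="true" stored="true"/>

  <!-- an explicitly declared field of the same type performs identically -->
  <field name="category" type="string" indexed="true" stored="true"/>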

: Is it better to copy all fields to a content field and then always
: search there? This works, but then it is hard to boost specific field
: values, and that is what we want to do.

you can always do both ... the schemas i work with tend to have several 
"aggregation" fields, many of which are populated using copyField with 
dynamic field patterns for the "source" ... you can still query on 
specific fields (with high boosts) in addition to querying on the 
aggregated fields.
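
the pattern looks roughly like this (hypothetical field names, and assuming 
a "text" fieldType is defined in the schema):

  <!-- everything matching the dynamic pattern gets aggregated into a
       single catchall field via copyField -->
  <dynamicField name="*_txt" type="text" indexed="true" stored="true"/>
  <field name="catchall" type="text" indexed="true" stored="false"
         multiValued="true"/>
  <copyField source="*_txt" dest="catchall"/>

a query can then hit the catchall field for recall while still boosting the 
specific fields, eg: title_txt:ipod^10 catchall:ipod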

FWIW: the main index i worry about has well over 1000 fields once you 
consider all the dynamic fields.  I think the last time i looked it was 
about 6000 ... the only thing i worry about is making sure i have 
omitNorms="true" on any dynamic field whose cardinality i can't guarantee 
will be "small" (ie: 2-10).
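
eg: (hypothetical field name, same caveat as above about the "text" type):

  <!-- norms are skipped for these dynamic fields, so no index-time length
       normalization / boost data is kept for them -->
  <dynamicField name="attr_*" type="text" indexed="true" stored="true"
                omitNorms="true"/>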

I use request handlers that execute 100-300 "queries" for each "request" 
against those dynamic fields ... but each individual query typically only 
has 1-10 clauses in it.


-Hoss
