It Depends (tm). Storing data in a Solr index mostly just consumes disk space; the *.fdt and *.fdx files aren't really germane to the amount of memory needed for search. There will be some additional memory requirements for the documentCache, though. You'll also consume more resources if you ask for everything with &fl=*, since there's more disk seeking going on to fetch the stored fields.
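For example, the cost difference mostly shows up in how much you ask Solr to return per hit. A rough SolrJ (4.x-era) sketch, where the URL, core name, query, and field names are all made up:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class FieldListExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and core name.
        SolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

        // Only ask for the id: very little stored-field (*.fdt) data is read
        // per hit, and the documentCache entries stay small.
        SolrQuery idsOnly = new SolrQuery("category:books");
        idsOnly.setFields("id");
        QueryResponse idResponse = server.query(idsOnly);
        for (SolrDocument d : idResponse.getResults()) {
            System.out.println(d.getFieldValue("id"));
        }

        // Ask for everything (same effect as &fl=*): every stored field is
        // read off disk for each returned document.
        SolrQuery everything = new SolrQuery("category:books");
        everything.setFields("*");
        QueryResponse fullResponse = server.query(everything);
        System.out.println("hits: " + fullResponse.getResults().getNumFound());
    }
}

If you only ever pull the id and then hit an external cache, the first form barely touches the stored fields at all; with fl=* every stored field is fetched for every hit you return.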
All that said, you haven't really told us anything about the size of your corpus, documents, etc., so it's hard to say. The other advantage of storing all your original fields (i.e. NOT the destinations of copyFields) is that you can take advantage of atomic updates, which allow you to update parts of documents if you want (there's a rough SolrJ sketch at the bottom of this mail).

Best advice: try it and see.

Best,
Erick

On Tue, Feb 19, 2013 at 9:35 AM, jacobmarcus20 <jacob.mar...@gmail.com> wrote:

> Hi all,
>
> I typically do not store a lot of attributes of the entity that I am
> indexing. Upon search, I fetch the ids of the entities from the index and
> then use those ids to look up a distributed cache like memcached/Couchbase.
> This pattern has worked fine for me in the past.
>
> But I could avoid the distributed cache in the architecture if I stored
> all the necessary attributes in the document itself. Is this a good idea?
>
> One thing this approach would do is make each document larger and require
> a bigger document cache. The other downside I can think of is that
> performance may suffer from storing and then retrieving a lot of
> attributes. I also think the number of index updates will increase a lot,
> since there are more attributes that can change and trigger an update.
> Does this matter?
>
> What do folks feel about this approach? Assume we are talking about a
> complex entity with more than 100 attributes.
>
> Thanks,
> Jacob
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Storing-all-attributes-in-the-document-so-that-I-can-avoid-a-distributed-cache-tp4041293.html
> Sent from the Solr - User mailing list archive at Nabble.com.
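P.S. A rough SolrJ (4.x-era) sketch of the atomic-update pattern mentioned above. The URL, id value, and field names are made up, and it assumes all of your non-copyField fields are stored so Solr can reconstruct the rest of the document:

import java.util.HashMap;
import java.util.Map;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class AtomicUpdateExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and core name.
        SolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

        // Update a single attribute of an existing document without
        // re-sending the other ~100 fields.
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "product-123");

        // The "set" operation replaces the field's current value.
        Map<String, Object> priceUpdate = new HashMap<String, Object>();
        priceUpdate.put("set", 19.99);
        doc.addField("price", priceUpdate);

        server.add(doc);
        server.commit();
    }
}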