Hi Mark, I can't help with reducing file sizes, but I'm curious...
What sort of documents were you storing: number of fields, average document size, many dynamic fields or mainly all static? It would be good to hear about a real-world large-scale index in terms of response times. Did the server have enough RAM to store it all in memory?

Cheers,
Rob

On Tue 29/12/09 18:23, markwaddle <m...@markwaddle.com> wrote:

> I have an index that used to have ~38M docs at 17.2GB. I deleted all but 13K
> docs using a delete by query, a commit, and then an optimize. A "*:*" query now
> returns 13K docs. The problem is that the files on disk are still 17.1GB in
> size. I expected the optimize to shrink the files. Is there a way I can
> shrink them now that the index only has 13K docs?
>
> Mark
> --
> View this message in context:
> http://old.nabble.com/Delete%2C-commit%2C-optimize-doesn%27t-reduce-index-file-size-tp26958067p26958067.html
> Sent from the Solr - User mailing list archive at Nabble.com.
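For reference, the delete/commit/optimize sequence Mark describes maps to three XML messages POSTed to Solr's update handler. A minimal sketch, assuming a core whose update handler is at http://localhost:8983/solr/update and a hypothetical delete query of status:obsolete (both placeholders, not from the thread); it prints the curl invocations rather than hitting a live server:

```shell
# Hypothetical update-handler URL; adjust host/port/core for your setup.
SOLR_URL="http://localhost:8983/solr/update"

# Delete by query, then commit, then optimize. Printed as curl commands
# so the sketch runs without a live Solr instance; drop the echo to
# actually execute them.
for BODY in '<delete><query>status:obsolete</query></delete>' '<commit/>' '<optimize/>'; do
  echo "curl $SOLR_URL -H 'Content-Type: text/xml' --data-binary '$BODY'"
done
```

After the `<optimize/>` call, Lucene should merge the remaining segments and drop the files belonging to deleted documents, which is why the unchanged 17.1GB on disk is surprising.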