Re: Frequent deletions

2015-01-13 Thread Shawn Heisey
On 1/13/2015 12:10 AM, ig01 wrote:
> Unfortunately this is the case, we do have hundreds of millions of documents on one Solr instance/server. All our configs and schema use the default configuration. Our index size is 180G; does that mean that we need at least 180G heap size?
If you ha…
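As a rough sketch of the heap-size side of this question (assumptions: the stock Jetty start.jar launcher shipped with Solr 4.x and an arbitrary 8 GB value, not a recommendation for this index), the maximum Java heap is set when Solr is started:

  java -Xmx8g -jar start.jar

The heap does not need to match the on-disk index size; leaving most of the machine's RAM to the operating system's disk cache generally matters more for a large index.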

Re: Frequent deletions

2015-01-12 Thread ig01
View this message in context: http://lucene.472066.n3.nabble.com/Frequent-deletions-tp4176689p4179122.html

Re: Frequent deletions

2015-01-12 Thread Shawn Heisey
On 1/10/2015 11:46 PM, ig01 wrote:
> Thank you all for your response. The thing is that we have a 180G index while half of it is deleted documents. We tried to run an optimization in order to shrink the index size but it crashes on ‘out of memory’ when the process reaches 120G. Is it possible to optimize parts of the index?
…

Re: Frequent deletions

2015-01-12 Thread ig01
…memory do we need for a 180G optimization? Does every update delete the document and create a new one? How can a commit with expungeDeletes=true affect performance? Currently we do not have a performance issue. Thanks in advance.
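For reference, a commit with expungeDeletes can be sent directly to the update handler; a minimal sketch, assuming a hypothetical host and core name:

  curl "http://localhost:8983/solr/collection1/update?commit=true&expungeDeletes=true"

This asks Lucene to merge away segments carrying deleted documents as part of the commit, which is lighter than a full optimize but still costs merge I/O.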

Re: Frequent deletions

2015-01-11 Thread David Santamauro
> …an optimization in order to reduce index size?
> Thanks in advance.

Re: Frequent deletions

2015-01-11 Thread Erick Erickson
>> …an optimization in order to shrink the index size but it crashes on ‘out of memory’ when the process reaches 120G.
>> Is it possible to optimize parts of the index?
>> Please advise what we can do in this situation.
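On the recurring question of optimizing only part of the index: a hedged sketch of a capped force merge using the maxSegments parameter (hypothetical host and core name), which merges down to a target segment count rather than a single huge segment:

  curl "http://localhost:8983/solr/collection1/update?optimize=true&maxSegments=10"

Merging down to a handful of segments can reclaim much of the space held by deleted documents while doing less work than a full merge to one segment.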

Re: Frequent deletions

2015-01-11 Thread Jack Krupansky
> …it crashes on ‘out of memory’ when the process reaches 120G.
> Is it possible to optimize parts of the index?
> Please advise what we can do in this situation.

Re: Frequent deletions

2015-01-11 Thread Alexandre Rafalovitch
> …it's not an option for us; all the documents in our index have the same deletion probability.
> Is there any other solution to perform an optimization in order to reduce index size?
> Thanks in advance.

Re: Frequent deletions

2015-01-11 Thread ig01
Hi, it's not an option for us; all the documents in our index have the same deletion probability. Is there any other solution to perform an optimization in order to reduce index size? Thanks in advance.

Re: Frequent deletions

2015-01-11 Thread Michał B.
> …when the process reaches 120G.
> Is it possible to optimize parts of the index?
> Please advise what we can do in this situation.

Re: Frequent deletions

2015-01-11 Thread Jürgen Wagner (DVT)
> …an optimization in order to shrink the index size but it crashes on ‘out of memory’ when the process reaches 120G.
> Is it possible to optimize parts of the index?
> Please advise what we can do in this situation.

Re: Frequent deletions

2015-01-11 Thread ig01
…Please advise what we can do in this situation.
View this message in context: http://lucene.472066.n3.nabble.com/Frequent-deletions-tp4176689p4178700.html

RE: Frequent deletions

2015-01-06 Thread Amey Jadiye
Well, we are doing the same thing (in a way). We have to do frequent mass deletions; at a time we are deleting around 20M+ documents. All I am doing is, after the deletion, firing the below command on each of our Solr nodes and keeping some patience, as it takes a lot of time. curl -vvv "http://…
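The command is truncated above; a hypothetical reconstruction of the kind of call described (host, core name and parameters are assumptions, not the poster's actual command) would be an expungeDeletes commit or a capped force merge, for example:

  curl -vvv "http://localhost:8983/solr/collection1/update?commit=true&expungeDeletes=true"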

Re: Frequent deletions

2015-01-01 Thread Alexandre Rafalovitch
>> …you can issue a "force merge" (aka optimize) command from the URL (or cURL etc.) as:
>> http://localhost:8983/solr/techproducts/update?optimize=true
>> But please don't do this unless it's absolutely necessary. You state that you have "frequent deletions"…

Re: Frequent deletions

2015-01-01 Thread Michael McCandless
> …expungeDeletes=true
> and if that isn't enough, try an optimize call
> you can issue a "force merge" (aka optimize) command from the URL (or cURL etc.) as:
> http://localhost:8983/solr/techproducts/update?optimize=true
> But please don't do this unless it's absolutely necessary…

Re: Frequent deletions

2014-12-31 Thread Erick Erickson
force merge" (aka optimize) command from the URL (Or cUrl etc) as: http://localhost:8983/solr/techproducts/update?optimize=true But please don't do this unless it's absolutely necessary. You state that you have "frequent deletions", but eventually this shoul dall happen

Frequent deletions

2014-12-31 Thread ig01
Hello, we perform frequent deletions from our index, which greatly increases the index size. How can we perform an optimization in order to reduce the size? Please advise. Thanks.
View this message in context: http://lucene.472066.n3.nabble.com/Frequent-deletions-tp4176689.html
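A minimal sketch of the delete-then-merge cycle discussed in the replies above, with a hypothetical host, core name and delete query:

  curl "http://localhost:8983/solr/collection1/update?commit=true" -H "Content-Type: text/xml" --data-binary '<delete><query>expired:true</query></delete>'
  curl "http://localhost:8983/solr/collection1/update?optimize=true&maxSegments=10"

Deleting documents only marks their space as reclaimable; the bytes on disk are released once the segments holding them are merged away.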