On 1/13/2015 12:10 AM, ig01 wrote:
> Unfortunately this is the case, we do have hundreds of millions of documents
> on one Solr instance/server. All our configs and schema are with default
> configurations. Our index size is 180G, does that mean that we need at least
> 180G heap size?

If you ha
http://lucene.472066.n3.nabble.com/Frequent-deletions-tp4176689p4179122.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 1/10/2015 11:46 PM, ig01 wrote:
> Thank you all for your response,
> The thing is that we have a 180G index while half of it is deleted documents.
> We tried to run an optimization in order to shrink the index size but it
> crashes on ‘out of memory’ when the process reaches 120G.
> Is it possible to calculate how much memory we need for a 180G optimization?
> Does every update delete the document and create a new one?
> How can a commit with expungeDeletes=true affect performance?
> Currently we do not have a performance issue.
> Thanks in advance.
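Before deciding between expungeDeletes and a full optimize, it can help to check how much of the index really is deleted documents: the Luke request handler reports numDocs (live documents) and maxDoc (live plus deleted). A sketch, where the host and the core name "mycore" are placeholders for your own setup:

```shell
# Hypothetical host and core name -- substitute your own.
SOLR="http://localhost:8983/solr/mycore"

# The Luke handler reports index-level stats; maxDoc - numDocs is the
# number of deleted documents still occupying space in the index.
curl "$SOLR/admin/luke?numTerms=0&wt=json"
```

If maxDoc is roughly twice numDocs, about half the index is deleted documents, which matches the situation described above.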
Hi,
It's not an option for us; all the documents in our index have the same
deletion probability.
Is there any other solution to perform an optimization in order to reduce
index size?
Thanks in advance.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Frequent-dele
We tried to run an optimization in order to shrink index size but it
crashes on ‘out of memory’ when the process reaches 120G.
Is it possible to optimize parts of the index?
Please advise what can we do in this situation.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Frequent-deletions-tp4176689p4178700.html
Sent from the Solr - User mailing list archive at Nabble.com.
Well, we are doing the same thing (in a way). We have to do frequent mass
deletions; at a time we are deleting around 20M+ documents. All I am doing is,
after the deletion, firing the below command on each of our Solr nodes and
keeping some patience, as it takes quite a lot of time.
curl -vvv
"http://
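The command above is cut off in the archive, so the following is only a guess at its general shape: a delete-by-query followed by a commit with expungeDeletes=true. The host, the collection name "mycollection", and the timestamp query are all hypothetical:

```shell
# All names here are hypothetical -- the original command was truncated.
SOLR="http://localhost:8983/solr/mycollection"

# Delete a large batch of documents by query (illustrative query; assumes
# a "timestamp" field that your schema may not have).
curl "$SOLR/update" \
  -H 'Content-Type: text/xml' \
  --data-binary '<delete><query>timestamp:[* TO NOW-90DAYS]</query></delete>'

# Commit with expungeDeletes=true: merges segments that are mostly deletes,
# reclaiming space more cheaply than a full optimize.
curl "$SOLR/update?commit=true&expungeDeletes=true"
```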
Try a commit with expungeDeletes=true,
and if that isn't enough, try an optimize call:
you can issue a "force merge" (aka optimize) command from the URL (or
curl etc.) as:
http://localhost:8983/solr/techproducts/update?optimize=true
But please don't do this unless it's absolutely necessary. You state
that you have "frequent deletions", but eventually this should all
happen
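If a full optimize keeps failing part-way through, one variant worth trying (check that your Solr version supports it) is to pass maxSegments, so the force merge stops at N segments instead of merging everything down to a single huge segment:

```shell
# maxSegments caps the force merge at N segments rather than merging the
# whole index into one segment; a gentler, cheaper merge.
curl "http://localhost:8983/solr/techproducts/update?optimize=true&maxSegments=8"
```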
Hello,
We perform frequent deletions from our index, which greatly increases the
index size.
How can we perform an optimization in order to reduce the size?
Please advise,
Thanks.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Frequent-deletions-tp4176689.html
Sent from the Solr - User mailing list archive at Nabble.com.