Re: Solr Collection Reload

2020-12-16 Thread Moulay Hicham
Thanks Erick - The logs did NOT provide any errors. Thanks for the
thread dump suggestion.

The issue was resolved when we restarted all the nodes.

Also, I am wondering if restarting the leaders would have been
sufficient. I will try this next time this happens.
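
For next time, a thread dump could be captured with something like the
following (the pid lookup, file paths, and host below are assumptions for
illustration, not our real values):

# Dump the Solr JVM's threads with jstack (Solr 8 runs Jetty via start.jar)
SOLR_PID=$(pgrep -f start.jar | head -n 1)
jstack "$SOLR_PID" > /tmp/solr-threaddump-$(date +%s).txt

# Or pull the same information over HTTP from the node's admin API
curl 'http://localhost:8983/solr/admin/info/threads?wt=json' > /tmp/solr-threads.json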



On Tue, Dec 15, 2020 at 5:21 AM Erick Erickson  wrote:
>
> Well, there’s no information here to help.
>
> The first thing I’d check is what the Solr
> logs are saying. Especially if you’ve
> changed any of your configuration files.
>
> If that doesn’t show anything, I'd take a thread
> dump and look at that, perhaps there’s some
> deadlock.
>
> But that said, a reload shouldn’t take more time
> than a startup…
>
> Best,
> Erick
>
> > On Dec 14, 2020, at 5:44 PM, Moulay Hicham  wrote:
> >
> > Hi,
> >
> > I have an issue with the collection reload API. The reload seems to be
> > hanging. It's been in the running state for many days.
> >
> > Can you please point me to any documentation that explains the
> > under-the-hood steps of the reload task?
> >
> > FYI. I am using solr 8.1
> >
> > Thanks,
> >
> > Moulay
>


Solr Collection Reload

2020-12-14 Thread Moulay Hicham
Hi,

I have an issue with the collection reload API. The reload seems to be
hanging. It's been in the running state for many days.

Can you please point me to any documentation that explains the
under-the-hood steps of the reload task?

FYI. I am using solr 8.1

Thanks,

Moulay


Collection reload issue

2020-12-12 Thread Moulay Hicham
Hi,

I am using solr 8.1

I ran a collection reload operation:
admin/collections?action=RELOAD&name=&async=1000'.

when I checked on the task status, it shows that it's still running:
admin/collections?action=REQUESTSTATUS&requestid=1000'
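
For reference, the full calls looked roughly like this (the host and
collection name are placeholders, not the real values):

# Kick off the reload asynchronously under request id 1000
curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=my_collection&async=1000'

# Poll the status of the async request
curl 'http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=1000'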

It's been showing that this task is in running state for the last few days.

I am not sure why the task still shows as running. Is it hanging? How can I
tell the real status of the task (running, failed, or completed)?

Please advise,

Thanks,
Moulay


Re: Solr Collection reload task has been in running state for a very long time

2020-12-11 Thread Moulay Hicham
Hi,

I will really appreciate if someone can help me with this.

Thank you,

Moulay

On Thu, Dec 10, 2020, 8:28 AM Moulay Hicham  wrote:

> Hi,
>
> We have a Solr cluster of 30 nodes with a replication factor of 3.
> Each index is about 80GB.
> The Solr version is 8.1.
> The cluster has high TPS for both reads and writes.
>
> We recently made a schema change and uploaded it using the ZKCLI
> script. Then we issued an async collection reload request:
> admin/collections?action=RELOAD&name=&async=1000'
>
> When we check on the status of this request, it shows that it's still
> running:
>
> admin/collections?action=REQUESTSTATUS&requestid=1000'
> {
>   "responseHeader":{
> "status":0,
> "QTime":1},
>   "status":{
> "state":"running",
> "msg":"found [1000] in running tasks"}}
>
> This task has been in a running state for about 5 hours so far. I am
> not sure if this is expected, or if the task actually failed or
> completed but its status was never reported back to ZooKeeper.
>
> Also, if it has been running that long, is it because the index is being
> actively updated (with high TPS)? We have a soft commit of 10s and a
> hard commit of 60s.
>
> Please help me understand what's going on.
>
> Thanks,
> Moulay
>


Solr Collection reload task has been in running state for a very long time

2020-12-10 Thread Moulay Hicham
Hi,

We have a Solr cluster of 30 nodes with a replication factor of 3.
Each index is about 80GB.
The Solr version is 8.1.
The cluster has high TPS for both reads and writes.

We recently made a schema change and uploaded it using the ZKCLI
script. Then we issued an async collection reload request:
admin/collections?action=RELOAD&name=&async=1000'

When we check on the status of this request, it shows that it's still running:

admin/collections?action=REQUESTSTATUS&requestid=1000'
{
  "responseHeader":{
"status":0,
"QTime":1},
  "status":{
"state":"running",
"msg":"found [1000] in running tasks"}}

This task has been in a running state for about 5 hours so far. I am
not sure if this is expected, or if the task actually failed or
completed but its status was never reported back to ZooKeeper.
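
One thing I may try (sketched with placeholder host, collection name, and
request ids) is clearing the stored async id and re-submitting the reload
under a fresh id, to rule out a stale status entry:

# Clear the stored state for the old async request id
curl 'http://localhost:8983/solr/admin/collections?action=DELETESTATUS&requestid=1000'

# Re-submit the reload under a new async id and poll it
curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=my_collection&async=1001'
curl 'http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=1001'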

Also, if it has been running that long, is it because the index is being
actively updated (with high TPS)? We have a soft commit of 10s and a
hard commit of 60s.

Please help me understand what's going on.

Thanks,
Moulay


Collection reload task is taking long time

2020-12-09 Thread Moulay Hicham
Hi,

We have a Solr cluster of 30 nodes with a replication factor of 3.
Each index is about 80GB.
The Solr version is 8.1.
The cluster has high TPS for both reads and writes.

We recently made a schema change and uploaded it using the ZKCLI script.
Then we issued an async collection reload request:
admin/collections?action=RELOAD&name=&async=1000'

When we check on the status of this request, it shows that it's still
running:

admin/collections?action=REQUESTSTATUS&requestid=1000'

{

  "responseHeader":{

"status":0,

"QTime":1},

  "status":{

"state":"running",

"msg":"found [1000] in running tasks"}}

This task has been in a running state for *about 5 hours* so far. I am not
sure if this is expected, or if the task actually failed or completed but
its status was never reported back to ZooKeeper.

Also, if it has been running that long, is it because the index is being
actively updated (with high TPS)? We have a soft commit of 10s and a hard
commit of 60s.

Please help me understand what's going on.

Thanks,
Moulay


Re: TieredMergePolicyFactory question

2020-10-26 Thread Moulay Hicham
Thanks Shawn and Erick.

So far I haven't noticed any performance issues, either before or after the change.

My concern all along has been COST. We could have left the configuration as
is - keeping the deleted documents in the index - but then we would have to
scale up our Solr cluster. That would double our Solr cluster cost, and the
additional cost is what we are trying to avoid.

I will test the expungeDeletes and revert the max segment size back to 5G.
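
For the expungeDeletes test, this is roughly the command I plan to run (the
host and collection name are placeholders):

# Commit with expungeDeletes=true so segments over the deleted-doc threshold get rewritten
curl 'http://localhost:8983/solr/my_collection/update' \
  -H 'Content-Type: text/xml' \
  --data-binary '<commit expungeDeletes="true"/>'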

Thanks again,

Moulay

On Mon, Oct 26, 2020 at 5:49 AM Erick Erickson 
wrote:

> "Some large segments were merged into 12GB segments and
> deleted documents were physically removed.”
> and
> “So with the current natural merge strategy, I need to update
> solrconfig.xml
> and increase the maxMergedSegmentMB often"
>
> I strongly recommend you do not continue down this path. You’re making a
> mountain out of a mole-hill. You have offered no proof that removing the
> deleted documents is noticeably improving performance. If you replace
> docs randomly, deleted docs will be removed eventually with the default
> merge policy without you doing _anything_ special at all.
>
> The fact that you think you need to continuously bump up the size of
> your segments indicates your understanding is incomplete. When
> you start changing settings basically at random in order to “fix” a
> problem,
> especially one that you haven’t demonstrated _is_ a problem, you
> invariably make the problem worse.
>
> By making segments larger, you’ve increased the work Solr (well Lucene) has
> to do in order to merge them since the merge process has to handle these
> larger segments. That’ll take longer. There are a fixed number of threads
> that do merging. If they’re all tied up, incoming updates will block until
> a thread frees up. I predict that if you continue down this path,
> eventually
> your updates will start to misbehave and you’ll spend a week trying to
> figure
> out why.
>
> If you insist on worrying about deleted documents, just expungeDeletes
> occasionally. I’d also set the segment size back to the default 5G. I
> can’t
> emphasize strongly enough that the way you’re approaching this will lead
> to problems, not to mention maintenance that is harder than it needs to
> be. If you do set the max segment size back to 5G, your 12G segments will
> _not_ merge until they have lots of deletes, making your problem worse.
> Then you’ll spend time trying to figure out why.
>
> Recovering from what you’ve done already has problems. Those large segments
> _will_ get rewritten (we call it “singleton merge”) when they’ve
> accumulated a
> lot of deletes, but meanwhile you’ll think that your problem is getting
> worse and worse.
>
> When those large segments have more than 10% deleted documents,
> expungeDeletes
> will singleton merge them and they’ll gradually shrink.
>
> So my prescription is:
>
> 1> set the max segment size back to 5G
>
> 2> monitor your segments. When you see your large segments > 5G have
> more than 10% deleted documents, issue an expungeDeletes command (not
> optimize).
> This will recover your index from the changes you’ve already made.
>
> 3> eventually, all your segments will be under 5G. When that happens, stop
> issuing expungeDeletes.
>
> 4> gather some performance statistics and prove one way or another that as
> deleted
> docs accumulate over time, it impacts performance. NOTE: after your last
> expungeDeletes, deleted docs will accumulate over time until they reach a
> plateau and
> shouldn’t continue increasing after that. If you can _prove_ that
> accumulating deleted
> documents affects performance, institute a regular expungeDeletes. You
> could optimize instead, but expungeDeletes is less expensive, and on a
> changing index expungeDeletes is sufficient. Optimize is only really
> useful for a static index, so I’d avoid it in your situation.
>
> Best,
> Erick
>
> > On Oct 26, 2020, at 1:22 AM, Moulay Hicham 
> wrote:
> >
> > Some large segments were merged into 12GB segments and
> > deleted documents were physically removed.
>
>


Re: TieredMergePolicyFactory question

2020-10-25 Thread Moulay Hicham
Thanks so much for clarifying. I have deployed the change to prod and it
seems to be working. Some large segments were merged into 12GB segments and
deleted documents were physically removed.

I am wondering about 3 other things:

1 - You mentioned that I need free disk space. Just to make sure we are
talking about disk space here - can RAM remain at the same size? My current
RAM is in the range: index size < RAM < 1.5 x index size

2 - When a merge is happening, it happens on disk, and when it's completed
the data is sync'ed with RAM. I am just guessing here ;-). I couldn't find a
good explanation of this online.

3 - Also, I am wondering what you recommend for continuously purging
deleted documents: optimize? expungeDeletes? natural merge?
Here are more details about why we need to purge documents.
My Solr cluster is very expensive, so we would like to contain the cost
and avoid scaling up if possible.
The index is being written to at a rate > 100 TPS.
We also have a requirement to delete old data, so we are continuously
trimming millions of documents daily that are older than X years.
With the current natural merge strategy, I need to update solrconfig.xml
and increase maxMergedSegmentMB often so that I can reclaim physical
disk space.
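
To watch this, something like the following could be used to check
per-segment sizes and deleted-doc counts (the host and core name are
placeholders; this assumes the core-level /admin/segments handler):

# Show per-segment size and deleted-document counts for one core
curl 'http://localhost:8983/solr/my_collection_shard1_replica_n1/admin/segments?wt=json'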

I am wondering whether a feature that rewrites a single large merged segment
into another segment - purging deleted documents in the process - could be
useful for use cases like mine. It would let me purge deleted documents
without continuously increasing maxMergedSegmentMB.

Thanks,
Moulay

On Fri, Oct 23, 2020 at 11:10 AM Erick Erickson 
wrote:

> Well, you mentioned that the segments you’re concerned about were merged a
> year ago.
> If segments aren’t being merged, they’re pretty static.
>
> There’s no real harm in optimizing _occasionally_, even in an NRT index.
> If you have
> segments that were merged that long ago, you may be indexing continually
> but it
> sounds like it’s a situation where you update more recent docs rather than
> random
> ones over the entire corpus.
>
> That caution is more for indexes where you essentially replace docs in your
> corpus randomly, and it’s really about wasting a lot of cycles rather than
> bad stuff happening. When you randomly update documents (or delete them),
> the extra work isn’t worth it.
>
> Either operation will involve a lot of CPU cycles and can require that you
> have
> at least as much free space on your disk as the indexes occupy, so do be
> aware
> of that.
>
> All that said, what evidence do you have that this is worth any effort at
> all?
> Depending on the environment, you may not even be able to measure
> performance changes so this all may be irrelevant anyway.
>
> But to your question: yes, you can cause regular merging to more
> aggressively merge segments with deleted docs by setting
> deletesPctAllowed in solrconfig.xml. The default value is 33, and you can
> set it as low as 20 or as high as 50. We put a floor of 20% because the
> cost starts to rise quickly if it’s lower than that, and expungeDeletes
> is a better alternative at that point.
>
> This is not a hard number, and in practice the percentage of your index
> that consists
> of deleted documents tends to be lower than this number, depending of
> course
> on your particular environment.
>
> Best,
> Erick
>
> > On Oct 23, 2020, at 12:59 PM, Moulay Hicham 
> wrote:
> >
> > Thanks Eric.
> >
> > My index is near real time and frequently updated.
> > I checked this page
> >
> > https://lucene.apache.org/solr/guide/8_1/uploading-data-with-index-handlers.html#xml-update-commands
> > and it says that using forceMerge/expungeDeletes is NOT recommended.
> >
> > So I was hoping that the change in mergePolicyFactory would affect the
> > segments with a high percentage of deletes as part of the REGULAR segment
> > merging cycles. Is my understanding correct?
> >
> >
> >
> >
> > On Fri, Oct 23, 2020 at 9:47 AM Erick Erickson 
> > wrote:
> >
> >> Just go ahead and optimize/forceMerge, but do _not_ optimize to one
> >> segment. Or you can expungeDeletes, that will rewrite all segments with
> >> more than 10% deleted docs. As of Solr 7.5, these operations respect
> the 5G
> >> limit.
> >>
> >> See:
> https://lucidworks.com/post/solr-and-optimizing-your-index-take-ii/
> >>
> >> Best
> >> Erick
> >>
> >> On Fri, Oct 23, 2020, 12:36 Moulay Hicham 
> wrote:
> >>
> >>> Hi,
> >>>
> >>> I am using solr 8.1 in production. We have about 30%-50% of deleted
> >>> documents in some old segments that were merged a year ago.

Re: TieredMergePolicyFactory question

2020-10-23 Thread Moulay Hicham
Thanks Eric.

My index is near real time and frequently updated.
I checked this page
https://lucene.apache.org/solr/guide/8_1/uploading-data-with-index-handlers.html#xml-update-commands
and it says that using forceMerge/expungeDeletes is NOT recommended.

So I was hoping that the change in mergePolicyFactory would affect the
segments with a high percentage of deletes as part of the REGULAR segment
merging cycles. Is my understanding correct?




On Fri, Oct 23, 2020 at 9:47 AM Erick Erickson 
wrote:

> Just go ahead and optimize/forceMerge, but do _not_ optimize to one
> segment. Or you can expungeDeletes, that will rewrite all segments with
> more than 10% deleted docs. As of Solr 7.5, these operations respect the 5G
> limit.
>
> See: https://lucidworks.com/post/solr-and-optimizing-your-index-take-ii/
>
> Best
> Erick
>
> On Fri, Oct 23, 2020, 12:36 Moulay Hicham  wrote:
>
> > Hi,
> >
> > I am using solr 8.1 in production. We have about 30%-50% of deleted
> > documents in some old segments that were merged a year ago.
> >
> > Each of these segments is about 5GB in size.
> >
> > I was wondering why these segments have a high % of deleted docs and
> found
> > out that they are NOT being candidates for merging because the
> > default TieredMergePolicy maxMergedSegmentMB is 5G.
> >
> > So I have modified the TieredMergePolicyFactory config as below to
> > lower the deleted docs %:
> >
> > <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
> >   <int name="maxMergeAtOnce">10</int>
> >   <int name="segmentsPerTier">10</int>
> >   <int name="maxMergedSegmentMB">12000</int>
> >   <double name="deletesPctAllowed">20</double>
> > </mergePolicyFactory>
> >
> >
> > Do you see any issues with increasing the max merged segment to 12GB and
> > lowering the deletesPctAllowed to 20%?
> >
> > Thanks,
> >
> > Moulay
> >
>


TieredMergePolicyFactory question

2020-10-23 Thread Moulay Hicham
Hi,

I am using solr 8.1 in production. We have about 30%-50% of deleted
documents in some old segments that were merged a year ago.

Each of these segments is about 5GB in size.

I was wondering why these segments have a high % of deleted docs and found
out that they are NOT being candidates for merging because the
default TieredMergePolicy maxMergedSegmentMB is 5G.

So I have modified the TieredMergePolicyFactory config as below to
lower the deleted docs %:

<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
  <int name="maxMergeAtOnce">10</int>
  <int name="segmentsPerTier">10</int>
  <int name="maxMergedSegmentMB">12000</int>
  <double name="deletesPctAllowed">20</double>
</mergePolicyFactory>


Do you see any issues with increasing the max merged segment to 12GB and
lowering the deletesPctAllowed to 20%?

Thanks,

Moulay