[ https://issues.apache.org/jira/browse/SOLR-12744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16605181#comment-16605181 ]
Varun Thacker commented on SOLR-12744:
--------------------------------------
{code:java}
[master] ~/apache-work/lucene-solr/solr$ grep -nr "PeerSync\|PeerSyncWithLeader\|RecoveryStrategy" example/cloud/node2/logs/solr.log
example/cloud/node2/logs/solr.log:199:2018-09-06 02:23:37.317 INFO (recoveryExecutor-4-thread-2-processing-n:192.168.0.3:7574_solr x:gettingstarted_shard1_replica_n1 c:gettingstarted s:shard1 r:core_node3) [c:gettingstarted s:shard1 r:core_node3 x:gettingstarted_shard1_replica_n1] o.a.s.c.RecoveryStrategy Starting recovery process. recoveringAfterStartup=true
example/cloud/node2/logs/solr.log:200:2018-09-06 02:23:37.317 INFO (recoveryExecutor-4-thread-1-processing-n:192.168.0.3:7574_solr x:gettingstarted_shard2_replica_n4 c:gettingstarted s:shard2 r:core_node7) [c:gettingstarted s:shard2 r:core_node7 x:gettingstarted_shard2_replica_n4] o.a.s.c.RecoveryStrategy Starting recovery process. recoveringAfterStartup=true
example/cloud/node2/logs/solr.log:201:2018-09-06 02:23:37.383 INFO (recoveryExecutor-4-thread-1-processing-n:192.168.0.3:7574_solr x:gettingstarted_shard2_replica_n4 c:gettingstarted s:shard2 r:core_node7) [c:gettingstarted s:shard2 r:core_node7 x:gettingstarted_shard2_replica_n4] o.a.s.c.RecoveryStrategy startupVersions size=99994 range=[1610805638833635330 to 1610796302151450625]
example/cloud/node2/logs/solr.log:202:2018-09-06 02:23:37.384 INFO (recoveryExecutor-4-thread-2-processing-n:192.168.0.3:7574_solr x:gettingstarted_shard1_replica_n1 c:gettingstarted s:shard1 r:core_node3) [c:gettingstarted s:shard1 r:core_node3 x:gettingstarted_shard1_replica_n1] o.a.s.c.RecoveryStrategy startupVersions size=99993 range=[1610805638833635330 to 1610796301871480833]
example/cloud/node2/logs/solr.log:203:2018-09-06 02:23:37.441 INFO (recoveryExecutor-4-thread-2-processing-n:192.168.0.3:7574_solr x:gettingstarted_shard1_replica_n1 c:gettingstarted s:shard1 r:core_node3) [c:gettingstarted s:shard1 r:core_node3 x:gettingstarted_shard1_replica_n1] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[gettingstarted_shard1_replica_n1]
example/cloud/node2/logs/solr.log:204:2018-09-06 02:23:37.441 INFO (recoveryExecutor-4-thread-1-processing-n:192.168.0.3:7574_solr x:gettingstarted_shard2_replica_n4 c:gettingstarted s:shard2 r:core_node7) [c:gettingstarted s:shard2 r:core_node7 x:gettingstarted_shard2_replica_n4] o.a.s.c.RecoveryStrategy Begin buffering updates. core=[gettingstarted_shard2_replica_n4]
example/cloud/node2/logs/solr.log:206:2018-09-06 02:23:37.441 INFO (recoveryExecutor-4-thread-2-processing-n:192.168.0.3:7574_solr x:gettingstarted_shard1_replica_n1 c:gettingstarted s:shard1 r:core_node3) [c:gettingstarted s:shard1 r:core_node3 x:gettingstarted_shard1_replica_n1] o.a.s.c.RecoveryStrategy Publishing state of core [gettingstarted_shard1_replica_n1] as recovering, leader is [http://192.168.0.3:8983/solr/gettingstarted_shard1_replica_n2/] and I am [http://192.168.0.3:7574/solr/gettingstarted_shard1_replica_n1/]
example/cloud/node2/logs/solr.log:208:2018-09-06 02:23:37.441 INFO (recoveryExecutor-4-thread-1-processing-n:192.168.0.3:7574_solr x:gettingstarted_shard2_replica_n4 c:gettingstarted s:shard2 r:core_node7) [c:gettingstarted s:shard2 r:core_node7 x:gettingstarted_shard2_replica_n4] o.a.s.c.RecoveryStrategy Publishing state of core [gettingstarted_shard2_replica_n4] as recovering, leader is [http://192.168.0.3:8983/solr/gettingstarted_shard2_replica_n6/] and I am [http://192.168.0.3:7574/solr/gettingstarted_shard2_replica_n4/]
example/cloud/node2/logs/solr.log:211:2018-09-06 02:23:37.450 INFO (recoveryExecutor-4-thread-1-processing-n:192.168.0.3:7574_solr x:gettingstarted_shard2_replica_n4 c:gettingstarted s:shard2 r:core_node7) [c:gettingstarted s:shard2 r:core_node7 x:gettingstarted_shard2_replica_n4] o.a.s.c.RecoveryStrategy Sending prep recovery command to [http://192.168.0.3:8983/solr]; [WaitForState: action=PREPRECOVERY&core=gettingstarted_shard2_replica_n6&nodeName=192.168.0.3:7574_solr&coreNodeName=core_node7&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
example/cloud/node2/logs/solr.log:212:2018-09-06 02:23:37.450 INFO (recoveryExecutor-4-thread-2-processing-n:192.168.0.3:7574_solr x:gettingstarted_shard1_replica_n1 c:gettingstarted s:shard1 r:core_node3) [c:gettingstarted s:shard1 r:core_node3 x:gettingstarted_shard1_replica_n1] o.a.s.c.RecoveryStrategy Sending prep recovery command to [http://192.168.0.3:8983/solr]; [WaitForState: action=PREPRECOVERY&core=gettingstarted_shard1_replica_n2&nodeName=192.168.0.3:7574_solr&coreNodeName=core_node3&state=recovering&checkLive=true&onlyIfLeader=true&onlyIfLeaderActive=true]
{code}
Updated patch. The output above shows what the logs will look like after the changes.
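For reference, here's a minimal sketch of the summary-style logging shown above. It's illustrative only, not the actual patch; the class, method, and parameter names are my own:
{code:java}
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class StartupVersionsLogging {
  private static final Logger log = LoggerFactory.getLogger(StartupVersionsLogging.class);

  // Sketch: log a one-line summary instead of dumping every version.
  // Assumes startingVersions is ordered newest-to-oldest, matching the
  // "range=[newest to oldest]" output above.
  static void logStartupVersions(List<Long> startingVersions) {
    if (startingVersions.isEmpty()) {
      log.info("startupVersions is empty");
    } else {
      // Stays one short line no matter how large numRecordsToKeep is, e.g.
      // startupVersions size=99994 range=[1610805638833635330 to 1610796302151450625]
      log.info("startupVersions size={} range=[{} to {}]",
          startingVersions.size(),
          startingVersions.get(0),
          startingVersions.get(startingVersions.size() - 1));
    }
  }
}
{code}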
I made a couple of additional changes: the "Caching fingerprint" message would pop up many times, since it's logged per segment and for all cores, polluting the logs. I didn't find that information useful for debugging either. Happy to change it back if you think otherwise.
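If the per-segment detail does turn out to be useful, a possible middle ground (again just a sketch, not what the patch does) would be to keep the message at DEBUG so it stays out of INFO logs; the names here are illustrative:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class FingerprintLogging {
  private static final Logger log = LoggerFactory.getLogger(FingerprintLogging.class);

  // Sketch: emit the per-segment message only when DEBUG logging is enabled,
  // so it no longer floods INFO logs. 'segment' and 'fingerprint' are
  // hypothetical parameter names.
  static void logCachedFingerprint(String segment, Object fingerprint) {
    if (log.isDebugEnabled()) {
      log.debug("Caching fingerprint for segment {}: {}", segment, fingerprint);
    }
  }
}
{code}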
> A high numRecordsToKeep can flood the info log message printing all the versions
> --------------------------------------------------------------------------------
>
> Key: SOLR-12744
> URL: https://issues.apache.org/jira/browse/SOLR-12744
> Project: Solr
> Issue Type: Improvement
> Security Level: Public(Default Security Level. Issues are Public)
> Reporter: Varun Thacker
> Priority: Major
> Attachments: SOLR-12744.patch, SOLR-12744.patch, one_log_line.log
>
>
> I was doing some tests around PeerSync to see how stable it is with the
> recent fixes.
> This log entry can flood the log by printing all the versions kept by
> numRecordsToKeep (100k in my testing):
> {code:java}
> log.info("###### startupVersions=[{}]", startingVersions);
> {code}
> This one log line wrote out an entry of almost 900KB.