Hi Ilya,

It's hard to say why rebalancing is not over yet from this log portion alone, but "Group eviction in progress" probably means that some partitions were successfully rebalanced and may now be removed from the "old" backup node.
That's how I understand it. According to Visor, there are more backup partitions than primary ones.

Are you getting the same numbers of partitions every time, or do they change? Any irregularities in your logs other than this one?
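For reference, in the quoted logs below remainingPartsToEvict drops from 293 (21:57:49) to 291 (21:59:49), i.e. roughly one partition per minute. A back-of-the-envelope extrapolation, assuming that rate holds, gives:

```python
# Values taken from the quoted log: remainingPartsToEvict at two timestamps.
parts_at_21_57 = 293    # 2020-04-30T21:57:49
parts_at_21_59 = 291    # 2020-04-30T21:59:49
interval_minutes = 2.0

rate_per_minute = (parts_at_21_57 - parts_at_21_59) / interval_minutes  # 1.0
eta_minutes = parts_at_21_59 / rate_per_minute  # ~291 minutes, i.e. ~5 hours

print(f"eviction rate: {rate_per_minute} parts/min, ETA: {eta_minutes:.0f} min")
```

At one partition per minute the eviction should finish in hours, not days, so if it has been running for several days the counter is probably stalling or resetting rather than steadily draining; comparing snapshots over a longer window would show that.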

At present, the cluster is not being accessed, yet the log keeps printing essentially the same information about every 2 seconds.

We have 5 nodes, 2 of which produce the following logs.
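If it helps, here is a small, hypothetical Python sketch (not an official Ignite tool) for pulling remainingPartsToEvict out of such logs, so you can check whether the counter actually decreases over time on each of the two nodes. The regex is written against the "Eviction in progress" lines quoted below:

```python
import re

# Matches lines like:
# [2020-04-30T21:57:49,898][INFO][sys-#28326][PartitionsEvictManager]
#   Eviction in progress [permits=0, threads=4, groups=4, remainingPartsToEvict=293]
PATTERN = re.compile(
    r"\[(?P<ts>[^\]]+)\].*Eviction in progress \[permits=\d+, threads=\d+, "
    r"groups=\d+,\s*remainingPartsToEvict=(?P<parts>\d+)\]"
)

def remaining_parts(lines):
    """Return (timestamp, remainingPartsToEvict) pairs found in log lines."""
    out = []
    for line in lines:
        m = PATTERN.search(line)
        if m:
            out.append((m.group("ts"), int(m.group("parts"))))
    return out
```

Run it over each node's log and plot or diff the counts; a counter that drops and then jumps back up would point at evictions being restarted rather than progressing.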



Regards,
--
Ilya Kasnacheev


Sun, 3 May 2020 at 04:49, 18624049226 <[email protected]>:

    Hi community,

    Two nodes were added to a running cluster with persistence enabled.
    After several days, it seems that the rebalancing is still not over.
    The log outputs the following information about every 2 minutes:

    Why? How can we get the rebalancing to finish?

    [2020-04-30T21:57:49,898][INFO][sys-#28326][PartitionsEvictManager]
    Eviction in progress [permits=0, threads=4, groups=4,
    remainingPartsToEvict=293]
    [2020-04-30T21:57:49,898][INFO][sys-#28326][PartitionsEvictManager]
    Group eviction in progress [grpName=CO_CO_LINE, grpId=-1588248812,
    remainingPartsToEvict=115, partsEvictInProgress=0, totalParts=538]
    [2020-04-30T21:57:49,898][INFO][sys-#28326][PartitionsEvictManager]
    Group eviction in progress [grpName=PI_COM_DAY, grpId=-1904194728,
    remainingPartsToEvict=25, partsEvictInProgress=0, totalParts=448]
    [2020-04-30T21:57:49,898][INFO][sys-#28326][PartitionsEvictManager]
    Group eviction in progress [grpName=CO_CO, grpId=64322847,
    remainingPartsToEvict=114, partsEvictInProgress=6, totalParts=544]
    [2020-04-30T21:57:49,898][INFO][sys-#28326][PartitionsEvictManager]
    Group eviction in progress [grpName=CO_CUST, grpId=1684722246,
    remainingPartsToEvict=39, partsEvictInProgress=0, totalParts=462]
    [2020-04-30T21:57:49,898][INFO][sys-#28326][PartitionsEvictManager]
    Partitions have been scheduled for eviction: [grpId=64322847,
    grpName=CO_CO, eviction=[40, 58, 292, 305, 324, 362, 551, 583,
    591, 601, 669, 701, 726, 741-742, 785, 821, 855, 922, 956, 1009]]
    [2020-04-30T21:58:32,288][INFO][grid-timeout-worker-#39][IgniteKernal]

    Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=9ebed61d, uptime=4days, 19:34:36.448]
    ^-- H/N/C [hosts=5, nodes=5, CPUs=80]
    ^-- CPU [cur=18%, avg=18.24%, GC=0%]
    ^-- PageMemory [pages=36986314]
    ^-- Heap [used=12815MB, free=37.43%, comm=20480MB]
    ^-- Off-heap [used=146170MB, free=28.73%, comm=205000MB]
    ^-- sysMemPlc region [used=0MB, free=99.99%, comm=100MB]
    ^-- default region [used=146170MB, free=28.63%, comm=204800MB]
    ^-- metastoreMemPlc region [used=0MB, free=99.84%, comm=0MB]
    ^-- TxLog region [used=0MB, free=100%, comm=100MB]
    ^-- Ignite persistence [used=134048MB]
    ^-- sysMemPlc region [used=0MB]
    ^-- default region [used=134047MB]
    ^-- metastoreMemPlc region [used=0MB]
    ^-- TxLog region [used=0MB]
    ^-- Outbound messages queue [size=0]
    ^-- Public thread pool [active=0, idle=0, qSize=0]
    ^-- System thread pool [active=16, idle=0, qSize=3]
    ^-- Striped thread pool [active=0, idle=16, qSize=0]
    
[2020-04-30T21:58:49,329][INFO][db-checkpoint-thread-#409][GridCacheDatabaseSharedManager]
    Skipping checkpoint (no pages were modified)
    [checkpointBeforeLockTime=13ms, checkpointLockWait=0ms,
    checkpointListenersExecuteTime=6ms, checkpointLockHoldTime=7ms,
    reason='timeout']
    [2020-04-30T21:59:32,297][INFO][grid-timeout-worker-#39][IgniteKernal]

    Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=9ebed61d, uptime=4days, 19:35:36.457]
    ^-- H/N/C [hosts=5, nodes=5, CPUs=80]
    ^-- CPU [cur=17.9%, avg=18.24%, GC=0%]
    ^-- PageMemory [pages=36986314]
    ^-- Heap [used=8335MB, free=59.3%, comm=20480MB]
    ^-- Off-heap [used=146170MB, free=28.73%, comm=205000MB]
    ^-- sysMemPlc region [used=0MB, free=99.99%, comm=100MB]
    ^-- default region [used=146170MB, free=28.63%, comm=204800MB]
    ^-- metastoreMemPlc region [used=0MB, free=99.84%, comm=0MB]
    ^-- TxLog region [used=0MB, free=100%, comm=100MB]
    ^-- Ignite persistence [used=134048MB]
    ^-- sysMemPlc region [used=0MB]
    ^-- default region [used=134047MB]
    ^-- metastoreMemPlc region [used=0MB]
    ^-- TxLog region [used=0MB]
    ^-- Outbound messages queue [size=0]
    ^-- Public thread pool [active=0, idle=0, qSize=0]
    ^-- System thread pool [active=6, idle=10, qSize=0]
    ^-- Striped thread pool [active=0, idle=16, qSize=0]
    [2020-04-30T21:59:49,898][INFO][sys-#28325][PartitionsEvictManager]
    Eviction in progress [permits=1, threads=4, groups=4,
    remainingPartsToEvict=291]
    [2020-04-30T21:59:49,898][INFO][sys-#28325][PartitionsEvictManager]
    Group eviction in progress [grpName=CO_CO_LINE, grpId=-1588248812,
    remainingPartsToEvict=115, partsEvictInProgress=0, totalParts=538]
    [2020-04-30T21:59:49,898][INFO][sys-#28325][PartitionsEvictManager]
    Group eviction in progress [grpName=PI_COM_DAY, grpId=-1904194728,
    remainingPartsToEvict=25, partsEvictInProgress=0, totalParts=448]
    [2020-04-30T21:59:49,898][INFO][sys-#28325][PartitionsEvictManager]
    Group eviction in progress [grpName=CO_CO, grpId=64322847,
    remainingPartsToEvict=112, partsEvictInProgress=6, totalParts=544]
    [2020-04-30T21:59:49,899][INFO][sys-#28325][PartitionsEvictManager]
    Group eviction in progress [grpName=CO_CUST, grpId=1684722246,
    remainingPartsToEvict=39, partsEvictInProgress=0, totalParts=462]
    [2020-04-30T21:59:49,899][INFO][sys-#28325][PartitionsEvictManager]
    Partitions have been scheduled for eviction: [grpId=64322847,
    grpName=CO_CO, eviction=[40, 58, 292, 305, 324, 362, 551, 583,
    591, 601, 669, 701, 726, 741-742, 785, 821, 855, 922, 956, 1009]]
    [2020-04-30T22:00:32,302][INFO][grid-timeout-worker-#39][IgniteKernal]


