Re: Remove the ignite node from cluster.

2018-04-30 Thread takumi
Thank you for the reply.

If I remove an ignite instance from the cluster, do I just kill the Java
process that runs that instance?
I want to know the correct way to shut it down.

In addition, if a scale-in occurs, when should I update the baseline
topology?
I want to know when I must update the baseline topology and when rebalancing
takes place.
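For the shutdown part, a common pattern is a graceful SIGTERM rather than kill -9, since Ignite installs a JVM shutdown hook by default; a minimal sketch (signalling a throwaway process here instead of a real ignite node):

```shell
# A graceful stop sends SIGTERM: the node's shutdown hook lets it leave the
# topology cleanly, while kill -9 (SIGKILL) skips that.
# Demo only: signal a throwaway process instead of a real ignite node.
sleep 60 &
PID=$!            # stand-in for the ignite node's Java process id
kill -TERM "$PID"
wait "$PID" 2>/dev/null
echo "stopped $PID"
```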




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Using a cache as an affinity co-located processing buffer in Ignite.Net

2018-04-30 Thread Raymond Wilson
Cross posting to dev list for comment on cache interceptor availability on
Ignite .Net client.

-Original Message-
From: Raymond Wilson [mailto:raymond_wil...@trimble.com]
Sent: Saturday, April 28, 2018 10:35 PM
To: 'user@ignite.apache.org' 
Subject: RE: Using a cache as an affinity co-located processing buffer in
Ignite.Net

Further investigation shows CacheInterceptor is not a part of the
Ignite.NET API.

Is there a plan/ticket for this to be done?

-Original Message-
From: Raymond Wilson [mailto:raymond_wil...@trimble.com]
Sent: Saturday, April 28, 2018 1:08 PM
To: user@ignite.apache.org
Subject: Re: Using a cache as an affinity co-located processing buffer in
Ignite.Net

Val,

Are the interceptors invoked in the affinity co-located context of the
item? The help is a little unclear on that.

Thanks,
Raymond.

Sent from my iPhone

> On 28/04/2018, at 12:12 PM, vkulichenko wrote:
>
> Raymond,
>
> If you go with the approach I described above, I would actually recommend
> using interceptors:
>
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/CacheInterceptor.html
>
> Continuous query seems to be a bit cumbersome for this.
>
> -Val
>
>
>


Re: Remove the ignite node from cluster.

2018-04-30 Thread Evgenii Zhuravlev
Hi,

I think the safest way to do it is to remove nodes one by one from the
cluster and wait until rebalancing has finished after each node (assuming, of
course, that you have backups). But keep in mind that with a huge amount of
data, rebalancing can take some time, which could be significant when you
want to scale out again.

Evgenii
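The flow described above can be sketched with control.sh (a sketch assuming Ignite 2.4+ with a persistence baseline; the path and consistent ID are placeholders, and the commands only run if an Ignite installation is actually found):

```shell
CONTROL="${IGNITE_HOME:-/opt/ignite}/bin/control.sh"
NODE_ID="node1-consistent-id"   # placeholder: consistent ID of the stopped node
if [ -x "$CONTROL" ]; then
  # After stopping the node gracefully, remove it from the baseline topology
  # so its partitions are rebalanced to the remaining nodes:
  "$CONTROL" --baseline remove "$NODE_ID" --yes
  # Inspect the baseline and wait for rebalancing before removing the next node:
  "$CONTROL" --baseline
else
  echo "control.sh not found under ${IGNITE_HOME:-/opt/ignite}; set IGNITE_HOME"
fi
```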

2018-04-30 17:51 GMT+03:00 takumi :

> I am trying to create a highly scalable system.
> When the system comes under low load, I scale it in.
> At that point, I want to remove the node from the ignite cluster correctly.
> How should I transfer the data and backups held on that node to other nodes
> automatically?
>
>
>
>
>


Remove the ignite node from cluster.

2018-04-30 Thread takumi
I am trying to create a highly scalable system.
When the system comes under low load, I scale it in.
At that point, I want to remove the node from the ignite cluster correctly.
How should I transfer the data and backups held on that node to other nodes
automatically?






Re: ignite cluster cannot be activated - Failed to restore from a checkpoint

2018-04-30 Thread yonggu.lee
In our configuration, the *work/* directory is always deleted when the cluster
restarts, because we use docker & kubernetes for cluster management and have
not set the "workDirectory" property to a persistent path. So there is no
*work/db/wal* either, yet the error still occurs.

And, as a separate topic: should the *work/* directory be located on a
persistent path? Is our current config, which does not persist the work
directory, wrong? In other words, should the "workDirectory" config property
be set to a persistent one like,



as well as storagePath, walPath, walArchivePath?
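For illustration, a hypothetical Spring XML sketch of such a configuration (the paths are placeholders, and the storage property names follow the DataStorageConfiguration introduced in Ignite 2.4):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Hypothetical: point the work directory at a persistent volume. -->
    <property name="workDirectory" value="/persistent/ignite/work"/>
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="storagePath" value="/persistent/ignite/storage"/>
            <property name="walPath" value="/persistent/ignite/wal"/>
            <property name="walArchivePath" value="/persistent/ignite/wal-archive"/>
        </bean>
    </property>
</bean>
```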

Thanks in advance.





Re: ignite cluster cannot be activated - Failed to restore from a checkpoint

2018-04-30 Thread Pavel Vinokurov
Hi,

You could remove the *work/db/wal* folder and restart the cluster, but back
up the whole *work/* directory first.
This workaround skips replaying the last changes from the WAL and just loads
the last checkpoint.
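The steps above can be sketched as a shell snippet (a demo against a throwaway directory layout; the real paths depend on the node's work directory):

```shell
# Demo setup: a throwaway work/ layout standing in for the node's real one.
mkdir -p demo/work/db/wal
touch demo/work/db/wal/0001.wal

# 1. Back up the whole work/ directory first.
cp -r demo/work demo/work-backup

# 2. Remove the WAL folder so that, on restart, the node loads the last
#    checkpoint without replaying the latest WAL changes.
rm -rf demo/work/db/wal
```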

2018-04-30 12:01 GMT+03:00 yonggu.lee :

> Our ignite cluster stuck in an inactive state, cannot be restored from a
> checkpoint.
>
> When cluster is activated, the following exception occurs,
>
> [log output and stack trace elided; duplicated in full in the original
> message below]

ignite cluster cannot be activated - Failed to restore from a checkpoint

2018-04-30 Thread yonggu.lee
Our ignite cluster stuck in an inactive state, cannot be restored from a
checkpoint.

When cluster is activated, the following exception occurs,

[17:40:54,750][INFO][exchange-worker-#122][GridCacheDatabaseSharedManager]
Read checkpoint status
[startMarker=/naver/ignite_storage/20180330/storage/node00-698bff11-10c4-4fa9-87bf-07f22714951e/cp/1525070153790-cd46119a-51cd-49af-9ffa-0dccca84fb20-START.bin,
endMarker=/naver/ignite_storage/20180330/storage/node00-698bff11-10c4-4fa9-87bf-07f22714951e/cp/1525070153790-cd46119a-51cd-49af-9ffa-0dccca84fb20-END.bin]
[17:40:54,750][INFO][exchange-worker-#122][GridCacheDatabaseSharedManager]
Applying lost cache updates since last checkpoint record
[lastMarked=FileWALPointer [idx=106922, fileOffset=3457606, len=299101,
forceFlush=false], lastCheckpointId=cd46119a-51cd-49af-9ffa-0dccca84fb20]
[17:40:54,818][SEVERE][exchange-worker-#122][GridDhtPartitionsExchangeFuture]
Failed to reinitialize local partitions (preloading will be stopped):
GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=12,
minorTopVer=1], discoEvt=DiscoveryCustomEvent
[customMsg=ChangeGlobalStateMessage
[id=9a375b51361-acca12ae-d9fb-4e21-a282-3bc7af575257,
reqId=b3985722-b063-4e5a-831e-9f84d656df96,
initiatingNodeId=c6e1394e-bf7a-4fe4-a1bf-f64193bd44f4, activate=true],
affTopVer=AffinityTopologyVersion [topVer=12, minorTopVer=1],
super=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=c6e1394e-bf7a-4fe4-a1bf-f64193bd44f4, addrs=[10.116.24.222, 10.244.5.0,
127.0.0.1, 172.17.0.1, 192.168.193.192], sockAddrs=[/10.244.5.0:47500,
/172.17.0.1:47500, /192.168.193.192:47500, /127.0.0.1:47500,
/10.116.24.222:47500], discPort=47500, order=3, intOrder=3,
lastExchangeTime=1525077608394, loc=false, ver=2.3.0#20171220-sha1:8431829c,
isClient=false], topVer=12, nodeId8=e8f4c909, msg=null,
type=DISCOVERY_CUSTOM_EVT, tstamp=1525077647980]], nodeId=c6e1394e,
evt=DISCOVERY_CUSTOM_EVT]
java.lang.IndexOutOfBoundsException: index 890
at
java.util.concurrent.atomic.AtomicReferenceArray.checkedByteOffset(AtomicReferenceArray.java:78)
at
java.util.concurrent.atomic.AtomicReferenceArray.get(AtomicReferenceArray.java:125)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.forceCreatePartition(GridDhtPartitionTopologyImpl.java:767)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.applyUpdate(GridCacheDatabaseSharedManager.java:1777)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.applyLastUpdates(GridCacheDatabaseSharedManager.java:1637)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreState(GridCacheDatabaseSharedManager.java:1072)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.beforeExchange(GridCacheDatabaseSharedManager.java:863)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1019)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:651)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2279)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
[17:40:54,818][INFO][exchange-worker-#122][GridDhtPartitionsExchangeFuture]
Finish exchange future [startVer=AffinityTopologyVersion [topVer=12,
minorTopVer=1], resVer=null, err=java.lang.IndexOutOfBoundsException: index
890]
[17:40:54,830][SEVERE][exchange-worker-#122][GridCachePartitionExchangeManager]
Failed to wait for completion of partition map exchange (preloading will not
start): GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryCustomEvent
[customMsg=null, affTopVer=AffinityTopologyVersion [topVer=12,
minorTopVer=1], super=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=c6e1394e-bf7a-4fe4-a1bf-f64193bd44f4, addrs=[10.116.24.222, 10.244.5.0,
127.0.0.1, 172.17.0.1, 192.168.193.192], sockAddrs=[/10.244.5.0:47500,
/172.17.0.1:47500, /192.168.193.192:47500, /127.0.0.1:47500,
/10.116.24.222:47500], discPort=47500, order=3, intOrder=3,
lastExchangeTime=1525077608394, loc=false, ver=2.3.0#20171220-sha1:8431829c,
isClient=false], topVer=12, nodeId8=e8f4c909, msg=null,
type=DISCOVERY_CUSTOM_EVT, tstamp=1525077647980]], crd=TcpDiscoveryNode
[id=8e65440a-df65-4770-9a7b-26672bd574a3, addrs=[10.116.25.32, 10.244.6.0,
127.0.0.1, 172.17.0.1, 192.168.82.128], sockAddrs=[/10.244.6.0:47500,
/10.116.25.32:47500, /172.17.0.1:47500, /192.168.82.128:47500,
/127.0.0.1:47500], discPort=47500, order=1, intOrder=1,
lastExchangeTime=1525077608394, loc=false, ver=2.3.0#20171220-sha1:8431829c,
isClient=false], 

Re: Intermittent Spikes in Response Time

2018-04-30 Thread ezhuravlev
Hi Chris,

How do you map compute tasks to nodes?

It's possible that 2 nodes in your cluster always store more data than the
others, which is why you see these spikes: depending on your affinity key,
too much data may be collocated. You can check this by calling
IgniteCache.localSize on each node and comparing the results.

I would also recommend checking Affinity.mapPartitionsToNodes (obtained via
Ignite.affinity), just to make sure that all nodes hold roughly the same
number of partitions.

Evgenii


