Re: Persistent Store Not enabled in Ignite Yarn Deployment
Hello!

Unfortunately it's hard to tell why the node would stop without looking at the client and server logs. Can you share these somewhere?

You may also want to set a memory policy for these nodes, with the values that your YARN configuration expects them to have: https://apacheignite.readme.io/docs/memory-configuration

Regards,

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
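As a sketch of such a memory policy in Ignite 2.x terms: the off-heap size of the default data region can be capped to what the YARN container actually provides. The 10 GB figure below is illustrative only, not taken from the poster's configuration:

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <!-- Cap off-heap memory so it matches what YARN allocates per container
                     (10 GB here is an illustrative value). -->
                <property name="maxSize" value="#{10L * 1024 * 1024 * 1024}"/>
            </bean>
        </property>
    </bean>
</property>
```

If the region is allowed to grow past the container's allocation, YARN may kill the container, which would look exactly like a node silently stopping.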
Re: Persistent Store Not enabled in Ignite Yarn Deployment
Thanks for your time!

1) Changed the logic of generating keys: a query now fetches the maximum key from the cache, and each new record gets max key + 1. This has resolved the count mismatch. Thank you.

2) After splitting the data into more batches and writing them to the grid over many iterations, the client threw the exception below (the initial few iterations were successful):

18/01/31 10:13:28 ERROR com.project: Error: class org.apache.ignite.internal.NodeStoppingException: Operation has been cancelled (node is stopping).
javax.cache.CacheException: class org.apache.ignite.internal.NodeStoppingException: Operation has been cancelled (node is stopping).
18/01/31 10:13:28 ERROR com.project$$anonfun$startWritingToGrid$1: org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1287)
18/01/31 10:13:28 ERROR com.project$$anonfun$startWritingToGrid$1: org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1648)
18/01/31 10:13:28 ERROR com.project$$anonfun$startWritingToGrid$1: org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.putAll(IgniteCacheProxyImpl.java:1071)
18/01/31 10:13:28 ERROR com.project$$anonfun$startWritingToGrid$1: org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.putAll(GatewayProtectedCacheProxy.java:928)

My guess is that the node gets stopped when memory is full.

Thanks in advance!

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
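The "max key + 1" scheme from point (1) can be sketched as a pure helper. The names here are hypothetical; in the client, the current maximum would come from a query such as `select max(_key) from Data`:

```scala
// Hypothetical sketch of the key generation described in (1):
// derive a contiguous block of fresh keys from the current maximum
// key in the cache (None when the cache is still empty).
def nextKeys(currentMax: Option[Long], count: Int): Seq[Long] = {
  val start = currentMax.getOrElse(0L) + 1
  start until (start + count)
}
```

For example, `nextKeys(Some(100L), 3)` yields keys 101, 102, 103. Fetching the maximum once per batch (rather than once per record) keeps the extra query cost negligible.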
Re: Persistent Store Not enabled in Ignite Yarn Deployment
Hello!

I don't think that anything will get evicted with the configuration you have provided. I think you should check whether the keys are really unique (yes, I remember that you include currentTimeMillis in them, but it still makes sense to double-check) and also that all values are of type Data. If some of them are not of type Data, SQL will not see them.

Can you split your data into more batches (e.g. 10 batches, 20k records each) and provide counts after every batch is ingested?

Regards,

--
Ilya Kasnacheev

2018-01-30 17:28 GMT+03:00 Raghav:
> Hi,
>
> I would like to add the below points:
>
> 1) Ignite YARN is started once [server side] and it is not stopped between
> iterations. This means the Ignite nodes are negotiated between YARN and
> Ignite only once; once finalized, they should stay the same.
>
> Please find the server logs below.
>
> [12:30:46] Topology snapshot [ver=1, servers=1, clients=0, CPUs=48, heap=9.9GB]
> [12:30:46] Topology snapshot [ver=2, servers=2, clients=0, CPUs=96, heap=20.0GB]
> [12:30:47] Topology snapshot [ver=3, servers=3, clients=0, CPUs=144, heap=30.0GB]
> [12:30:47] Topology snapshot [ver=4, servers=4, clients=0, CPUs=192, heap=39.0GB]
> [12:30:47] Topology snapshot [ver=5, servers=5, clients=0, CPUs=240, heap=49.0GB]
> [12:30:47] Topology snapshot [ver=6, servers=6, clients=0, CPUs=240, heap=59.0GB]
> [12:30:47] Topology snapshot [ver=7, servers=7, clients=0, CPUs=240, heap=69.0GB]
> [12:30:48] Topology snapshot [ver=8, servers=8, clients=0, CPUs=240, heap=79.0GB]
> [12:30:48] Topology snapshot [ver=9, servers=9, clients=0, CPUs=240, heap=89.0GB]
> [12:30:48] Topology snapshot [ver=10, servers=10, clients=0, CPUs=240, heap=99.0GB]
> [12:50:26] Topology snapshot [ver=11, servers=10, clients=1, CPUs=240, heap=120.0GB]
> [12:54:18] Topology snapshot [ver=12, servers=10, clients=0, CPUs=240, heap=99.0GB]
> [12:56:07] Topology snapshot [ver=13, servers=10, clients=1, CPUs=240, heap=120.0GB]
> [13:00:49] Topology snapshot [ver=14, servers=10, clients=0, CPUs=240, heap=99.0GB]
> [13:06:28] Topology snapshot [ver=15, servers=10, clients=1, CPUs=240, heap=120.0GB]
> [13:07:17] Topology snapshot [ver=16, servers=10, clients=0, CPUs=240, heap=99.0GB]
>
> 2) Only Ignite clients are started and stopped across iterations. As you
> can see, the client count alternates between 0 and 1, whereas the server
> count stays at 10.
>
> 3) /tmp is an HDFS path that we have configured for Ignite and provided in
> cluster.properties. We could change this to any path.
>
> It would be helpful if there were a way to enable persistence in a YARN
> deployment.
>
> Thank you.
>
> Best Regards,
> Raghav
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
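Ilya's batch-and-count experiment above can be sketched as follows. Here `ingest` and `countAll` are hypothetical stand-ins for the `cache.putAll` call and the SQL count query shown elsewhere in this thread:

```scala
// Sketch of the suggested experiment: split the data into fixed-size
// batches, ingest one batch at a time, and record the cache count after
// each step. A count that stops growing pinpoints the batch where
// records start going missing.
def ingestInBatches[K, V](data: Map[K, V],
                          batchSize: Int,
                          ingest: Map[K, V] => Unit,
                          countAll: () => Int): Seq[Int] =
  data.toSeq.grouped(batchSize).map { chunk =>
    ingest(chunk.toMap)
    countAll()
  }.toSeq
```

With 200k records and a batch size of 20k, this produces 10 counts; the first count that deviates from `batchIndex * 20000` tells you which batch lost data (or collided on keys).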
Re: Persistent Store Not enabled in Ignite Yarn Deployment
Hi,

I would like to add the below points:

1) Ignite YARN is started once [server side] and it is not stopped between iterations. This means the Ignite nodes are negotiated between YARN and Ignite only once; once finalized, they should stay the same.

Please find the server logs below.

[12:30:46] Topology snapshot [ver=1, servers=1, clients=0, CPUs=48, heap=9.9GB]
[12:30:46] Topology snapshot [ver=2, servers=2, clients=0, CPUs=96, heap=20.0GB]
[12:30:47] Topology snapshot [ver=3, servers=3, clients=0, CPUs=144, heap=30.0GB]
[12:30:47] Topology snapshot [ver=4, servers=4, clients=0, CPUs=192, heap=39.0GB]
[12:30:47] Topology snapshot [ver=5, servers=5, clients=0, CPUs=240, heap=49.0GB]
[12:30:47] Topology snapshot [ver=6, servers=6, clients=0, CPUs=240, heap=59.0GB]
[12:30:47] Topology snapshot [ver=7, servers=7, clients=0, CPUs=240, heap=69.0GB]
[12:30:48] Topology snapshot [ver=8, servers=8, clients=0, CPUs=240, heap=79.0GB]
[12:30:48] Topology snapshot [ver=9, servers=9, clients=0, CPUs=240, heap=89.0GB]
[12:30:48] Topology snapshot [ver=10, servers=10, clients=0, CPUs=240, heap=99.0GB]
[12:50:26] Topology snapshot [ver=11, servers=10, clients=1, CPUs=240, heap=120.0GB]
[12:54:18] Topology snapshot [ver=12, servers=10, clients=0, CPUs=240, heap=99.0GB]
[12:56:07] Topology snapshot [ver=13, servers=10, clients=1, CPUs=240, heap=120.0GB]
[13:00:49] Topology snapshot [ver=14, servers=10, clients=0, CPUs=240, heap=99.0GB]
[13:06:28] Topology snapshot [ver=15, servers=10, clients=1, CPUs=240, heap=120.0GB]
[13:07:17] Topology snapshot [ver=16, servers=10, clients=0, CPUs=240, heap=99.0GB]

2) Only Ignite clients are started and stopped across iterations. As you can see, the client count alternates between 0 and 1, whereas the server count stays at 10.

3) /tmp is an HDFS path that we have configured for Ignite and provided in cluster.properties. We could change this to any path.

It would be helpful if there were a way to enable persistence in a YARN deployment.

Thank you.

Best Regards,
Raghav

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Persistent Store Not enabled in Ignite Yarn Deployment
I can see two options here:

- Between iteration 1 and iteration 2 some nodes were stopped, and perhaps some new nodes were started. Data on the stopped nodes became unavailable.
- Cache key collisions between iterations 1 and 2, such that 80% of the keys are identical and only 20% are distinct the second time.

I expect it is the former. When you ask YARN to run 10 Ignite nodes, I guess it will start them on random machines, not on the same ones every time. This will lead to a different set of machines the next time, and lost data.

I don't think you should be using persistence with YARN. In fact, the /tmp in the paths should give you a hint that you should not depend on the availability of data between runs.

Regards,

--
Ilya Kasnacheev

2018-01-30 16:21 GMT+03:00 Raghav:
> Hi,
>
> 1) Load data to cache:
>
> var cacheConf: CacheConfiguration[Long, Data] =
>   new CacheConfiguration[Long, Data]("DataCache")
> cacheConf.setCacheMode(CacheMode.PARTITIONED)
> cacheConf.setIndexedTypes(classOf[Long], classOf[Data])
> val cache = ignite.getOrCreateCache(cacheConf)
> var dataMap = getDataMap()
> cache.putAll(dataMap)
>
> There is no possibility of having duplicate keys, as currentTimeMillis
> along with the loop count is included in them.
>
> 2) Count logic:
>
> val sql1 = "select * from DataCache"
> val count = cache.query(new SqlFieldsQuery(sql1)).getAll.size()
>
> Used a query instead of metrics.
>
> 3) Errors: no errors in the server or client logs.
>
> Also checked for folder creation in the Ignite work directory
> [IGNITE_WORKING_DIR] as per
> https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood.
> But no folders were created for persistence.
>
> Thank you.
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
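The second option (key collisions) is easy to reproduce with time-derived keys. This is illustrative only, with made-up numbers rather than measurements from this cluster: if keys are built as a start timestamp plus a loop index, two iterations whose key windows overlap will silently reuse most of each other's keys.

```scala
// Illustrative only: keys derived as startMillis + loopIndex.
// With 1000 keys per iteration and the second iteration starting
// 200 "ticks" later, 800 of its keys (80%) collide with the first
// iteration's keys, so putAll overwrites instead of adding.
def timeKeys(startMillis: Long, count: Int): Set[Long] =
  (0 until count).map(i => startMillis + i).toSet

val first      = timeKeys(0L, 1000)
val second     = timeKeys(200L, 1000)
val collisions = (first intersect second).size
```

An 80% overlap of this kind would turn two 100k-record iterations into roughly 120k distinct records, which matches the counts reported earlier in the thread.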
Re: Persistent Store Not enabled in Ignite Yarn Deployment
Hi,

1) Load data to cache:

var cacheConf: CacheConfiguration[Long, Data] =
  new CacheConfiguration[Long, Data]("DataCache")
cacheConf.setCacheMode(CacheMode.PARTITIONED)
cacheConf.setIndexedTypes(classOf[Long], classOf[Data])
val cache = ignite.getOrCreateCache(cacheConf)
var dataMap = getDataMap()
cache.putAll(dataMap)

There is no possibility of having duplicate keys, as currentTimeMillis along with the loop count is included in them.

2) Count logic:

val sql1 = "select * from DataCache"
val count = cache.query(new SqlFieldsQuery(sql1)).getAll.size()

Used a query instead of metrics.

3) Errors: no errors in the server or client logs.

Also checked for folder creation in the Ignite work directory [IGNITE_WORKING_DIR] as per https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood. But no folders were created for persistence.

Thank you.

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Persistent Store Not enabled in Ignite Yarn Deployment
Hi,

1. How do you load data to cache? Is it possible the keys have duplicates?
2. How did you check there are 120k records in cache? Is it a whole-cache metric or a node-local metric?
3. Are there any errors in the logs?

On Tue, Jan 30, 2018 at 3:17 PM, Raghav wrote:
> Hello,
>
> I am trying to enable Ignite Native Persistence in an Ignite YARN
> deployment. The purpose of this is to have no eviction of data at all from
> the Ignite grid: whenever memory is full, the data should be stored on disk.
>
> But when I try to add a large number of records to the Ignite grid, the
> data is getting evicted.
>
> Example:
> In iteration 1, added 100k records. The expected and actual count of
> records is 100k.
> In iteration 2, added another 100k records. But instead of the expected
> 200k records, there were only around 120k records. I guess the remaining
> ones got evicted from the grid, so there is data loss.
>
> Kindly guide me to enable persistence without data eviction so that there
> is no data loss.
>
> Please find the details below.
>
> Ignite version: 2.3.0
>
> Cluster details for the YARN deployment:
>
> IGNITE_NODE_COUNT=10
> IGNITE_RUN_CPU_PER_NODE=5
> IGNITE_MEMORY_PER_NODE=10096
> IGNITE_VERSION=2.3.0
> IGNITE_PATH=/tmp/ignite/2.3.0/apache-ignite-fabric-2.3.0-bin.zip
> IGNITE_RELEASES_DIR=/tmp/ignite/2.3.0/releases
> IGNITE_WORKING_DIR=/tmp/ignite/2.3.0/work
> IGNITE_XML_CONFIG=/tmp/ignite/2.3.0/config/ignite-config.xml
> IGNITE_USERS_LIBS=/tmp/ignite/2.3.0/libs
> IGNITE_LOCAL_WORK_DIR=/local/home/ignite/2.3.0
>
> Ignite configuration for the YARN deployment:
>
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xmlns:util="http://www.springframework.org/schema/util"
>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>            http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
>            http://www.springframework.org/schema/util
>            http://www.springframework.org/schema/util/spring-util-2.0.xsd">
>
>     <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>
>         <property name="dataStorageConfiguration">
>             <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>                 <property name="defaultDataRegionConfiguration">
>                     <bean class="org.apache.ignite.configuration.DataRegionConfiguration"/>
>                 </property>
>             </bean>
>         </property>
>
>         <property name="discoverySpi">
>             <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>                 <property name="ipFinder">
>                     <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>                         <property name="addresses">
>                             <list>
>                                 <value>:47500</value>
>                             </list>
>                         </property>
>                     </bean>
>                 </property>
>             </bean>
>         </property>
>
>         value="1000"/>
>         value="1000"/>
>         value="1000"/>
>         value="50"/>
>         value="1000"/>
>     </bean>
> </beans>
>
> Thanks in advance!
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/

--
Best regards,
Andrey V. Mashenkov
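For reference, in Ignite 2.x native persistence is enabled per data region inside DataStorageConfiguration. A minimal sketch is shown below; the local path is illustrative only, not taken from this deployment:

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <!-- Must be a node-local filesystem path, not HDFS
             ("/var/ignite/persistence" is an illustrative value). -->
        <property name="storagePath" value="/var/ignite/persistence"/>
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <!-- This flag is what actually switches on native persistence;
                     without it, data beyond the region size is simply evicted. -->
                <property name="persistenceEnabled" value="true"/>
            </bean>
        </property>
    </bean>
</property>
```

Note that once persistence is enabled, an Ignite 2.3 cluster starts in an inactive state and must be activated (e.g. ignite.active(true)) before caches can be used. The need for stable node-local storage is also why persistence and dynamically scheduled YARN containers fit together poorly, as discussed later in this thread.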