New setup:

Node configuration:

  <beans xmlns="http://www.springframework.org/schema/beans"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://www.springframework.org/schema/beans
                             http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
      <property name="peerClassLoadingEnabled" value="true"/>
      <property name="deploymentMode" value="CONTINUOUS"/>
      <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
          <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
              <property name="persistenceEnabled" value="true"/>
            </bean>
          </property>
        </bean>
      </property>
    </bean>
  </beans>


Started 3 different nodes with this config:
./ignite.sh ../config/custom-config.xml


Client: it adds the data once, then queries the cache.


cfg.setClientMode(true);
try (Ignite ignite = Ignition.start(cfg)) {
    ignite.cluster().state(ClusterState.ACTIVE);

    CacheConfiguration<Integer, Department> deptCacheConfig =
        new CacheConfiguration<>(DEPT_CACHE);
    deptCacheConfig.setCacheMode(CacheMode.PARTITIONED);
    deptCacheConfig.setBackups(1);
    IgniteCache<Integer, Department> deptCache =
        ignite.getOrCreateCache(deptCacheConfig);

    Department d1 = new Department(1, "CS");
    Department d2 = new Department(2, "ECE");
    Department d3 = new Department(3, "CIVIL");

    if (deptCache.size(CachePeekMode.ALL) == 0) {
        System.out.println("Adding dept data to cache");
        deptCache.put(d1.getDeptId(), d1);
        deptCache.put(d2.getDeptId(), d2);
        deptCache.put(d3.getDeptId(), d3);
    }

    List<Cache.Entry<Integer, Department>> depts =
        deptCache.query(new ScanQuery<Integer, Department>()).getAll();
    depts.forEach(entry -> System.out.println(entry.getValue()));
}
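For what it's worth, as far as I understand the "partition has been lost" state does not clear on its own, even after the owning node rejoins; it has to be reset explicitly. Below is a minimal sketch (assuming the same deptCache name and a running cluster as above, with `LostPartitionSketch` just a hypothetical wrapper class) of declaring a loss policy up front and resetting lost partitions once the data is back:

```java
import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class LostPartitionSketch {
    static void configureAndReset(Ignite ignite) {
        // Declare up front how the cache behaves when partitions are lost:
        // READ_WRITE_SAFE makes reads/writes of lost partitions fail fast
        // instead of silently returning partial results.
        CacheConfiguration<Integer, Object> cacheCfg =
            new CacheConfiguration<>("deptCache");
        cacheCfg.setBackups(1);
        cacheCfg.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);
        ignite.getOrCreateCache(cacheCfg);

        // Once the node that owned the lost partitions is back (its data
        // reloaded from persistence), the lost flag must be reset manually:
        ignite.resetLostPartitions(Collections.singleton("deptCache"));
    }
}
```

This is only a sketch of the reset mechanism, not necessarily the root-cause fix for the scenario above.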


After inserting the data, this is how the cache data looks on the nodes:
[image: image.png]

Test:

1. The first time, it prints the data.
2. Kill node 3; the query still works. I believe this is because nodes 1
   and 2 are the primary nodes for the 3 records we inserted.
3. Restart node 3, then kill node 2; the query fails with the error below.

Caused by: class
org.apache.ignite.internal.processors.cache.CacheInvalidStateException:
Failed to execute query because cache partition has been lostParts
[cacheName=deptCache, part=513]

Baseline log:
[17:40:44] Topology snapshot [ver=11, locNode=340bad33, servers=2,
clients=1, state=ACTIVE, CPUs=8, offheap=19.0GB, heap=24.0GB]
[17:40:44]   ^-- Baseline [id=0, size=3, online=2, offline=1]

Does this mean that, for the query to run, all nodes in the baseline must be up?
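A hedged sketch related to this question (reusing the `ignite` instance from the client code above; `BaselineSketch` is just an illustrative wrapper class): it compares the recorded baseline with the servers actually online, and shows the call that shrinks the baseline to the current topology if an offline node is gone for good, so rebalancing can restore the configured backups:

```java
import java.util.Collection;

import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.BaselineNode;

public class BaselineSketch {
    static void inspectAndReset(Ignite ignite) {
        // Compare the recorded baseline with the servers actually online.
        Collection<BaselineNode> baseline =
            ignite.cluster().currentBaselineTopology();
        int online = ignite.cluster().forServers().nodes().size();
        System.out.println("Baseline size: "
            + (baseline == null ? 0 : baseline.size())
            + ", online servers: " + online);

        // If an offline baseline node will not return, the baseline can be
        // reset to the current topology version; rebalancing then restores
        // the configured backups on the remaining nodes.
        ignite.cluster().setBaselineTopology(ignite.cluster().topologyVersion());
    }
}
```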

On Mon, Jan 31, 2022 at 4:46 PM Surinder Mehra <redni...@gmail.com> wrote:

> Hi, thanks for pointing out why the data is being stored on the same
> machine.
> On the 2nd point:
> As I mentioned in the steps, we are getting the same exception even with
> backups.
> Could you please point out the other problems with the configuration that
> you mentioned?
>
>
> On Mon, Jan 31, 2022 at 4:35 PM Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>> There’s a lot to unpack here, but there are (at least) two problems with
>> your configuration.
>>
>> First, you should not be activating your cluster in your startup script.
>> That’s an operation that needs to be completed only once, not every time
>> the cluster (or node) starts. This is probably why all the data is being
>> stored on a single machine.
>>
>> Second, you create a cache but without any backups. That means that when
>> a node dies, you lose access to all the data.
>>
>> > On 31 Jan 2022, at 10:57, Surinder Mehra <redni...@gmail.com> wrote:
>> >
>> >
>> > Hi Guys,
>> > I observed the behavior below with persistence enabled. Could you please
>> > check and confirm whether it's a bug or I missed some configuration.
>> >
>> > Steps:
>> > 1. Start ignite nodes 1 and 2 with persistence enabled.
>> > 2. Step 1 will write a few entries to the ignite cache.
>> > 3. All entries go to one node.
>> > 4. Stop the node containing the data.
>> > 5. Run a query to get the data; the exception below is thrown. I guess
>> > this is expected because the node containing the data is down.
>> > 6. Restart the killed node; the exception shouldn't be thrown any more
>> > because the data is back in the caches.
>> > 7. Run a query to get the data; the exception below is thrown again.
>> > This looks weird.
>> > 8. Kill node 2 and try to query again: still the same exception.
>> > 9. Restart only the remaining node (the data-owning node). After the
>> > restart, the query starts working again.
>> > 10. I checked the cache directory under node 1; it never had part0.bin.
>> > The files start from part1 and so on.
>> > 11. From the exception, it looks like it is trying to read a
>> > non-existent file.
>> > 12. Tried setting a consistent ID for each node; still the same
>> > exception.
>> >
>> >
>> > Another observation: when a backup is enabled, shouldn't the backup
>> > node serve the query? Why does it throw the same exception below when
>> > node1 (the primary) is down?
>> >
>> > This exception is only thrown when the node containing the data goes
>> > down.
>> >
>> > Caused by: class
>> org.apache.ignite.internal.processors.cache.CacheInvalidStateException:
>> Failed to execute query because cache partition has been lostParts
>> [cacheName=deptCache, part=0]
>> >
>> > Node2{
>> >
>> > DataStorageConfiguration storageCfg = new DataStorageConfiguration();
>> > storageCfg.getDefaultDataRegionConfiguration()
>> >     .setPersistenceEnabled(true);
>> > IgniteConfiguration cfg = new IgniteConfiguration();
>> > cfg.setDataStorageConfiguration(storageCfg);
>> >
>> > try (Ignite ignite = Ignition.start(cfg)) {
>> >     ignite.cluster().state(ClusterState.ACTIVE);
>> >     CacheConfiguration<Integer, Department> deptCacheConfig =
>> >         new CacheConfiguration<>(DEPT_CACHE);
>> >     deptCacheConfig.setCacheMode(CacheMode.PARTITIONED);
>> >     IgniteCache<Integer, Department> deptCache =
>> >         ignite.getOrCreateCache(deptCacheConfig);
>> >
>> >     Department d1 = new Department(1, "CS");
>> >     Department d2 = new Department(2, "ECE");
>> >     Department d3 = new Department(3, "CIVIL");
>> >
>> >     if (deptCache.size(CachePeekMode.ALL) == 0) {
>> >         System.out.println("Adding dept data to cache");
>> >         deptCache.put(d1.getDeptId(), d1);
>> >         deptCache.put(d2.getDeptId(), d2);
>> >         deptCache.put(d3.getDeptId(), d3);
>> >     }
>> > }
>> > }
>> >
>> > Node1{
>> >
>> > DataStorageConfiguration storageCfg = new DataStorageConfiguration();
>> > storageCfg.getDefaultDataRegionConfiguration()
>> >     .setPersistenceEnabled(true);
>> > IgniteConfiguration cfg = new IgniteConfiguration();
>> > cfg.setDataStorageConfiguration(storageCfg);
>> >
>> > try (Ignite ignite = Ignition.start(cfg)) {
>> >     ignite.cluster().state(ClusterState.ACTIVE);
>> >     CacheConfiguration<Integer, Department> deptCacheConfig =
>> >         new CacheConfiguration<>(DEPT_CACHE);
>> >     deptCacheConfig.setCacheMode(CacheMode.PARTITIONED);
>> >     IgniteCache<Integer, Department> deptCache =
>> >         ignite.getOrCreateCache(deptCacheConfig);
>> >
>> >     Department d1 = new Department(1, "CS");
>> >     Department d2 = new Department(2, "ECE");
>> >     Department d3 = new Department(3, "CIVIL");
>> >
>> >     if (deptCache.size(CachePeekMode.ALL) == 0) {
>> >         System.out.println("Adding dept data to cache");
>> >         deptCache.put(d1.getDeptId(), d1);
>> >         deptCache.put(d2.getDeptId(), d2);
>> >         deptCache.put(d3.getDeptId(), d3);
>> >     }
>> > }
>> > }
>> >
>> > Client{
>> >
>> > IgniteConfiguration cfg = new IgniteConfiguration();
>> > cfg.setDataStorageConfiguration(storageCfg);
>> >
>> > try (Ignite ignite = Ignition.start(cfg)) {
>> >     IgniteCache<Integer, Department> deptCache =
>> >         ignite.getOrCreateCache(DEPT_CACHE);
>> >     List<Cache.Entry<Object, Object>> depts =
>> >         deptCache.query(new ScanQuery<>()).getAll();
>> >     depts.forEach(entry -> System.out.println(entry.getValue()));
>> > }
>> > }
>>
>>
