Re: Correct build process for Ignite

2018-03-09 Thread sanjaykud...@gmail.com
I am also getting the error below:

Downloading from central:
https://repo.maven.apache.org/maven2/classworlds/classworlds/1.1/classworlds-1.1.jar
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[ERROR] Unresolveable build extension: Plugin
org.apache.felix:maven-bundle-plugin:2.5.4 or one of its dependencies could
not be resolved: The following artifacts could not be resolved:
org.apache.maven:maven-core:jar:2.0.7,
org.apache.maven:maven-settings:jar:2.0.7,
org.apache.maven:maven-plugin-parameter-documenter:jar:2.0.7,
org.apache.maven:maven-profile:jar:2.0.7,
org.apache.maven:maven-model:jar:2.0.7,
org.apache.maven:maven-artifact:jar:2.0.7,
org.codehaus.plexus:plexus-container-default:jar:1.0-alpha-9-stable-1,
org.apache.maven:maven-



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Does Merge statement (DML) work with JDBC client driver

2018-03-09 Thread vkulichenko
Naveen,

Ignite provides an out-of-the-box integration with RDBMSs. The easiest way to
integrate would be to use the Web Console to generate all the required POJO
classes and configurations:
https://apacheignite-tools.readme.io/docs/automatic-rdbms-integration

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Usage of DataStreamer for bulk loading

2018-03-09 Thread Gaurav Bajaj
Hi Naveen,

I had a similar situation. Two things you can do:
1. Decouple the file reading from the cache streaming, so that both can be
handled in separate threads asynchronously.

2. Once you have the data from the CSV in a collection, use parallel streams
to add the data to the streamer from multiple threads (see the sketch below).
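
A minimal sketch of approach #2 (not production code), assuming the
ASSOCIATED_PARTIES cache and POJO from Naveen's snippet; the file name,
delimiter, and buffer sizes are placeholders:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

Ignite ignite = Ignition.ignite(); // the already-started client node

// 1. Decoupled reading: load the lines first (or hand them off via a queue).
List<String> lines = Files.readAllLines(Paths.get("associated_parties.csv"));

try (IgniteDataStreamer<String, ASSOCIATED_PARTIES> streamer =
         ignite.dataStreamer("ASSOCIATED_PARTIES")) {
    // Configure the streamer once, outside the loop.
    streamer.perNodeBufferSize(1024);
    streamer.perNodeParallelOperations(8);

    // 2. Parallel add: addData() is thread-safe, so the common
    // fork-join pool can feed the streamer from multiple threads.
    lines.parallelStream().forEach(line -> {
        String[] tokens = line.split(",", -1);
        ASSOCIATED_PARTIES p = new ASSOCIATED_PARTIES();
        p.setASSOCIATED_PARTY_ID(tokens[1]);
        p.setUPDATEDBY(tokens[3]);
        streamer.addData(tokens[0], p);
    });
} // close() flushes the remaining buffered entries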

Thanks,
Gaurav

On 09-Mar-2018 3:05 PM, "Naveen"  wrote:

> Hi DH
>
> I am not using any custom StreamReceiver; my requirement is very simple:
> I have huge data in a CSV file. I read it line by line, parse each line,
> populate the POJO, and use the DataStreamer to load the data into the cache.
>
> while (sc.hasNextLine()) {
>     ct++;
>     String line = sc.nextLine();
>     String[] tokens = line.split(Constants.Delimter, -1);
>     aASSOCIATED_PARTIES = new ASSOCIATED_PARTIES();
>
>     aASSOCIATED_PARTIES.setASSOCIATED_PARTY_ID(tokens[1]);
>     aASSOCIATED_PARTIES.setUPDATEDBY(tokens[3]);
>
>     streamer.perNodeBufferSize(Constants.perNodeBufferSize);
>     streamer.perNodeParallelOperations(Constants.perNodeParallelOperations);
>
>     streamer.addData(tokens[0], aASSOCIATED_PARTIES);
> }
>
> As you mentioned, I have made sure there is one DataStreamer per client;
> after this change, it stopped failing.
> Is there any way we can improve this performance?
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Correct build process for Ignite

2018-03-09 Thread Andrey Kornev
I suspect this happens because you have a repository mirror defined in your 
.m2/settings.xml that matches all repos. For example:

  <mirrors>
    <mirror>
      <id>my-mirror</id>
      <name>my-repo</name>
      <url>http://acme.com/my-repo</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>



From: vkulichenko 
Sent: Thursday, March 8, 2018 5:08 PM
To: user@ignite.apache.org
Subject: Re: Correct build process for Ignite

The build works for me (and most likely for everyone else, as there are no
complaints), so it looks like a local issue. I would try the following (rough
commands below):
- Clean up the local Maven repo.
- Run without a custom settings.xml.
- Run with verbose output to see if Maven provides more details on the
issue.
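
A rough sketch of those steps from a shell, assuming a default Maven layout
(the exact Ignite build flags are in DEVNOTES.txt in the source tree):

# 1. Clean up the local Maven repo (or just its org/apache subtree):
rm -rf ~/.m2/repository

# 2. Build without the custom settings.xml by pointing Maven at a minimal
#    one (empty-settings.xml is a placeholder containing just <settings/>):
mvn -s empty-settings.xml clean package -DskipTests

# 3. Re-run with debug output for more detail on the resolution failure:
mvn -X clean package -DskipTests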

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: eviction performance

2018-03-09 Thread Scott Feldstein
Hi Stanislav,
Thanks for the info and the note on the terminology.

So in my setup I'm partitioning by chunks of time. If I set eagerTtl to false,
how would the cleanup look? Would I periodically scan the partitions that I
want to be expired? Are there any best practices to that end?

Additionally, are there any truncate-like commands I can use to achieve a
lightweight cleanup of my data?

Thanks,
Scott

> On Mar 9, 2018, at 1:41 AM, Stanislav Lukyanov  wrote:
> 
> Hi,
>  
> A terminology nitpick: it seems that you're talking about expiry; eviction
> is a similar mechanism, but it is based on the data size, not time.
>  
> Have you tried setting CacheConfiguration.eagerTtl = false
> (https://apacheignite.readme.io/v2.3/docs/expiry-policies#section-eager-ttl)?
> With that setting, the expired entries will be removed on the next access
> instead of in the background.
> If you add entries in large batches but access them more uniformly over time,
> this could redistribute the cost of expiring the data more evenly.
>  
> Thanks,
> Stan
>  
> From: scottmf
> Sent: March 9, 2018, 6:51
> To: user@ignite.apache.org
> Subject: eviction performance
>  
> Hi,
> I am prototyping using Ignite to ingest lots of short-lived events in a
> system. In order to ingest the data I'm using the Kafka streamer mechanism.
> I'm pushing 20k events/second into the system, into a partitioned off-heap
> cache. I've run into a few performance issues, but I've been able to wiggle
> my way out of them using information from the forums and the Ignite docs. But
> the latest issue I can't seem to find any information about.
>  
> My cache is set up to evict the data after one hour. The insertion
> performance degrades substantially when the eviction mechanism kicks in,
> and the latency of the insertion continues to degrade over time.
>  
> WRT the OS, memory, and disk: none of them are saturated. The load avg is
> low, the GC is not under pressure, and the disk is not fully utilized.
>  
> I've experimented with different eviction times to make sure that it
> is indeed the eviction that is causing this. After some idle time I start up
> our event simulator and wait for the evictions to start, and sure enough the
> latency of insertions increases immediately when evictions start.
>  
> Each node is 32 GB / 8 CPUs.
>  
> Here are my jvm opts: "-Xmx12g -Xms12g -XX:MaxDirectMemorySize=10g
> -XX:PermSize=192m -XX:MaxPermSize=192m -XX:+UseG1GC -XX:ConcGCThreads=12
> -XX:ParallelGCThreads=22 -XX:MaxGCPauseMillis=1000 -XX:G1HeapWastePercent=2
> -XX:G1ReservePercent=15 -XX:+UnlockExperimentalVMOptions
> -XX:G1OldCSetRegionThresholdPercent=15 -XX:G1MixedGCLiveThresholdPercent=90
> -XX:G1MaxNewSizePercent=25 -Dcom.sun.management.jmxremote
> -XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError"
>  
> I was wondering, if there is no tuning I can do to avoid this overhead (I am
> partitioning my data by time), is there a performant way to clear one
> partition of a cache at a time? (I was thinking of something similar to a
> TRUNCATE command in an RDBMS.)
>  
> Here is my schema:
> class CachePojo {
>     @QuerySqlField(index=false)
>     private String buffer0;
>     @QuerySqlField(index=true)
>     private String buffer1;
>     @QuerySqlField
>     private String buffer2;
>     @QuerySqlField(index=true, inlineSize=8, descending=true)
>     private Timestamp timestamp;
>     @QuerySqlField(index=false)
>     private Map fields = new HashMap<>();
> }
>  
> Ignite config:
> IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
> igniteConfiguration.setIgniteHome(igniteHome);
> // this is probably not needed
> igniteConfiguration.setTransactionConfiguration(new TransactionConfiguration());
> igniteConfiguration.setPeerClassLoadingEnabled(true);
> igniteConfiguration.setIncludeEventTypes(new int[0]);
> igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi());
> igniteConfiguration.setDataStreamerThreadPoolSize(8);
> igniteConfiguration.setPublicThreadPoolSize(16);
> igniteConfiguration.setSystemThreadPoolSize(16);
> DataStorageConfiguration dsc = dataStorageConfiguration();
> igniteConfiguration.setDataStorageConfiguration(dsc);
>  
> dataStorageConfig:
> DataStorageConfiguration dataStorageConfiguration = new DataStorageConfiguration();
> DataRegionConfiguration dataRegionConfiguration = dataRegionConfiguration();
> dataStorageConfiguration.setDefaultDataRegionConfiguration(dataRegionConfiguration);
> dataStorageConfiguration.setDataRegionConfigurations(largeDataRegionConfiguration());
> dataStorageConfiguration.setPageSize(8192);
> dataStorageConfiguration.setMetricsEnabled(true);
> dataStorageConfiguration.setWriteThrottlingEnabled(true);
> dataStorageConfiguration.setStoragePath("/var/lib/ignite/persistence");

RE: Does Merge statement (DML) work with JDBC client driver

2018-03-09 Thread Naveen
Hi Stan

I do not want to use Oracle together with native persistence; I only want to
use Oracle as the persistence layer.

Are you sure we need to implement a CacheStore for each table we have in the
cluster?

If that is the case, we would need a separate code base for Oracle as the
persistence layer and another version of the code base for native persistence?

At the moment, since I am using native persistence, I just created the tables
through JDBC and am doing all the writes and reads through JDBC as well, so I
have not developed any POJOs for any of the tables.

Is my understanding correct?

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: How to configure a cluster as a persistent, replicated SQL database

2018-03-09 Thread Stanislav Lukyanov
Hi Naveen,

Please refer to this page: https://apacheignite.readme.io/docs/3rd-party-store.
In short, you need to implement a CacheStore or use one of the standard
implementations (CacheJdbcBlobStore, CacheJdbcPojoStore) and add it to your
cache configuration.
Also, you can set DataRegionConfiguration.persistenceEnabled=false to disable
native persistence.
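
A minimal programmatic sketch of the CacheJdbcPojoStore approach, assuming a
DataSource bean named "oracleDataSource", the ASSOCIATED_PARTIES table from
the other thread, and a hypothetical AssociatedParty POJO; the real field
mappings depend on your schema (the Web Console can generate them for you):

import java.sql.Types;

import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.cache.store.jdbc.JdbcType;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Describe how the table maps to the cache key/value types.
JdbcType type = new JdbcType();
type.setCacheName("ASSOCIATED_PARTIES");
type.setDatabaseTable("ASSOCIATED_PARTIES");
type.setKeyType(String.class);
type.setValueType(AssociatedParty.class);
type.setKeyFields(new JdbcTypeField(Types.VARCHAR, "PARTY_ID", String.class, "partyId"));
type.setValueFields(new JdbcTypeField(Types.VARCHAR, "UPDATEDBY", String.class, "updatedBy"));

CacheJdbcPojoStoreFactory<String, AssociatedParty> storeFactory =
    new CacheJdbcPojoStoreFactory<>();
storeFactory.setDataSourceBean("oracleDataSource"); // an Oracle DataSource you register
storeFactory.setTypes(type);

CacheConfiguration<String, AssociatedParty> ccfg =
    new CacheConfiguration<>("ASSOCIATED_PARTIES");
ccfg.setCacheStoreFactory(storeFactory);
ccfg.setReadThrough(true);   // load missing entries from Oracle
ccfg.setWriteThrough(true);  // propagate cache updates to Oracle

// As noted above, disable native persistence for the default region:
DataStorageConfiguration dsCfg = new DataStorageConfiguration();
dsCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(false);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(dsCfg);
cfg.setCacheConfiguration(ccfg);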

Thanks,
Stan

From: Naveen
Sent: March 9, 2018, 17:09
To: user@ignite.apache.org
Subject: Re: How to configure a cluster as a persistent, replicated SQL database

Hi Jose

I was asking how I can configure Oracle DB as the persistence layer.
At the moment I am using Ignite native persistence as the persistence layer,
but I would like to use Oracle DB instead.
How can I do this? What changes should I make to the config file?
My config file looks like the following for native Ignite persistence; what do
I need to add to use Oracle as the persistence layer?

[The inline XML configuration was stripped by the mail archive.]

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: How to configure a cluster as a persistent, replicated SQL database

2018-03-09 Thread Naveen
Hi Jose

I was asking how I can configure Oracle DB as the persistence layer.
At the moment I am using Ignite native persistence as the persistence layer,
but I would like to use Oracle DB instead.
How can I do this? What changes should I make to the config file?
My config file looks like the following for native Ignite persistence; what do
I need to add to use Oracle as the persistence layer?

[The inline XML configuration was stripped by the mail archive.]

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Usage of DataStreamer for bulk loading

2018-03-09 Thread Naveen
Hi DH

I am not using any custom StreamReceiver; my requirement is very simple:
I have huge data in a CSV file. I read it line by line, parse each line,
populate the POJO, and use the DataStreamer to load the data into the cache.

while (sc.hasNextLine()) {
    ct++;
    String line = sc.nextLine();
    String[] tokens = line.split(Constants.Delimter, -1);
    aASSOCIATED_PARTIES = new ASSOCIATED_PARTIES();

    aASSOCIATED_PARTIES.setASSOCIATED_PARTY_ID(tokens[1]);
    aASSOCIATED_PARTIES.setUPDATEDBY(tokens[3]);

    streamer.perNodeBufferSize(Constants.perNodeBufferSize);
    streamer.perNodeParallelOperations(Constants.perNodeParallelOperations);

    streamer.addData(tokens[0], aASSOCIATED_PARTIES);
}

As you mentioned, I have made sure there is one DataStreamer per client;
after this change, it stopped failing.
Is there any way we can improve this performance?

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite reads are slower with getAll

2018-03-09 Thread KR Kumar
Hi guys - I have an Ignite cluster with persistence enabled that has 200
million events in it. Right now the read throughput is around 3000 events per
second. I have increased the IOPS to 1 and even then I get the same
performance. Am I doing something really wrong, or is this how it performs
with large amounts of data?

I am using getAll with a batch of 300 keys per read. The cache is basically a
string key and a JSON message, so it's a String,String type of cache.

Any help/pointers?

Thanx and Regards,
KR Kumar



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Data Recovery after cluster restart

2018-03-09 Thread Stanislav Lukyanov
Hi Naveen,

The native persistence doesn't require you to reload data into memory; the
tables should be available after activation.
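
For reference, a one-line sketch, assuming an already-obtained Ignite instance
on any node or client (this is the 2.3-era API):

// A cluster with native persistence starts inactive; activate it before use:
ignite.active(true);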

How do you start the nodes? Do you use ignite.sh/ignite.bat?
Have you downloaded Ignite as a .zip archive or via Maven?
It's possible that your persistence files are created under /tmp, which would
lead to them being cleared on system restart.
Try setting the IgniteConfiguration.workDirectory property to some fixed path
and check that the files are created there (in the db subdirectory).
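
A minimal sketch of that setting (the path is a placeholder):

IgniteConfiguration cfg = new IgniteConfiguration();
// Keep the work directory (and thus its db subdirectory) on a fixed path
// that survives reboots, instead of a temp location:
cfg.setWorkDirectory("/opt/ignite/work");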

Thanks,
Stan

From: Naveen
Sent: 9 марта 2018 г. 9:16
To: user@ignite.apache.org
Subject: Data Recovery after cluster restart

Hi 
I am using Ignite 2.3

I have enabled persistence, using Ignite native persistence.

My config file is at the end of this message.
I have created a table through SQL with the script below:

CREATE TABLE ASSOCIATED_PARTIES_NEW(PARTY_ID VARCHAR, ASSOCIATED_PARTY_ID
VARCHAR, WALLETID VARCHAR, UPDATEDDATETIME TIMESTAMP, UPDATEDBY VARCHAR,
PRIMARY KEY (PARTY_ID))WITH
"template=partitioned,backups=1,cache_name=ASSOCIATED_PARTIES_NEW,value_type=com.ril.edif.model.ASSOCIATED_PARTIES";
 

Inserted data through the DataStreamer API and some through SQL; I have loaded
1M+ records, as you can see from the count below:

--
0: jdbc:ignite:thin://127.0.0.1> select count(*) from ASSOCIATED_PARTIES;
++
|COUNT(*)|
++
| 1043186|
++
1 row selected (1.227 seconds)
0: jdbc:ignite:thin://127.0.0.1>
--

However, when I restart the cluster, after activation, when I execute
"!tables" through SQL I don't see any tables. I can see the "work" folder
contains some huge files.

How do we get the data back, and what is the procedure? Do I need to create
the tables again, or do we have any commands to load the data back into
memory from disk?


--




<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/util
                           http://www.springframework.org/schema/util/spring-util.xsd">

    [The bean definitions were stripped by the mail archive. Only the static
    discovery IP finder addresses survive:
    10.144.114.113:47500..47502
    10.144.114.114:47500..47502
    10.144.114.115:47500..47502]

</beans>

-

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: eviction performance

2018-03-09 Thread Stanislav Lukyanov
Hi,

A terminology nitpick: it seems that you're talking about expiry; eviction
is a similar mechanism, but it is based on the data size, not time.

Have you tried setting CacheConfiguration.eagerTtl = false
(https://apacheignite.readme.io/v2.3/docs/expiry-policies#section-eager-ttl)?
With that setting, the expired entries will be removed on the next access
instead of in the background.
If you add entries in large batches but access them more uniformly over time,
this could redistribute the cost of expiring the data more evenly.
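
For illustration, a minimal sketch of a one-hour TTL with lazy expiry (the
cache name and types are placeholders):

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<String, String> ccfg = new CacheConfiguration<>("events");
// Entries expire one hour after creation...
ccfg.setExpiryPolicyFactory(
    CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.HOURS, 1)));
// ...but are purged lazily on access, not by the background cleaner thread.
ccfg.setEagerTtl(false);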

Thanks,
Stan

From: scottmf
Sent: March 9, 2018, 6:51
To: user@ignite.apache.org
Subject: eviction performance

Hi,
I am prototyping using Ignite to ingest lots of short-lived events in a
system. In order to ingest the data I'm using the Kafka streamer mechanism.
I'm pushing 20k events/second into the system, into a partitioned off-heap
cache. I've run into a few performance issues, but I've been able to wiggle
my way out of them using information from the forums and the Ignite docs. But
the latest issue I can't seem to find any information about.

My cache is set up to evict the data after one hour. The insertion
performance degrades substantially when the eviction mechanism kicks in,
and the latency of the insertion continues to degrade over time.

WRT the OS, memory, and disk: none of them are saturated. The load avg is
low, the GC is not under pressure, and the disk is not fully utilized.

I've experimented with different eviction times to make sure that it
is indeed the eviction that is causing this. After some idle time I start up
our event simulator and wait for the evictions to start, and sure enough the
latency of insertions increases immediately when evictions start.

Each node is 32 GB / 8 CPUs.

Here are my jvm opts: "-Xmx12g -Xms12g -XX:MaxDirectMemorySize=10g
-XX:PermSize=192m -XX:MaxPermSize=192m -XX:+UseG1GC -XX:ConcGCThreads=12
-XX:ParallelGCThreads=22 -XX:MaxGCPauseMillis=1000 -XX:G1HeapWastePercent=2
-XX:G1ReservePercent=15 -XX:+UnlockExperimentalVMOptions
-XX:G1OldCSetRegionThresholdPercent=15 -XX:G1MixedGCLiveThresholdPercent=90
-XX:G1MaxNewSizePercent=25 -Dcom.sun.management.jmxremote
-XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError"

I was wondering, if there is no tuning I can do to avoid this overhead (I am
partitioning my data by time), is there a performant way to clear one
partition of a cache at a time? (I was thinking of something similar to a
TRUNCATE command in an RDBMS.)

Here is my schema:
class CachePojo {
    @QuerySqlField(index=false)
    private String buffer0;
    @QuerySqlField(index=true)
    private String buffer1;
    @QuerySqlField
    private String buffer2;
    @QuerySqlField(index=true, inlineSize=8, descending=true)
    private Timestamp timestamp;
    @QuerySqlField(index=false)
    private Map fields = new HashMap<>();
}

Ignite config:
IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
igniteConfiguration.setIgniteHome(igniteHome);
// this is probably not needed
igniteConfiguration.setTransactionConfiguration(new TransactionConfiguration());
igniteConfiguration.setPeerClassLoadingEnabled(true);
igniteConfiguration.setIncludeEventTypes(new int[0]);
igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi());
igniteConfiguration.setDataStreamerThreadPoolSize(8);
igniteConfiguration.setPublicThreadPoolSize(16);
igniteConfiguration.setSystemThreadPoolSize(16);
DataStorageConfiguration dsc = dataStorageConfiguration();
igniteConfiguration.setDataStorageConfiguration(dsc);

dataStorageConfig:
DataStorageConfiguration dataStorageConfiguration = new DataStorageConfiguration();
DataRegionConfiguration dataRegionConfiguration = dataRegionConfiguration();
dataStorageConfiguration.setDefaultDataRegionConfiguration(dataRegionConfiguration);
dataStorageConfiguration.setDataRegionConfigurations(largeDataRegionConfiguration());
dataStorageConfiguration.setPageSize(8192);
dataStorageConfiguration.setMetricsEnabled(true);
dataStorageConfiguration.setWriteThrottlingEnabled(true);
dataStorageConfiguration.setStoragePath("/var/lib/ignite/persistence");
dataStorageConfiguration.setWalMode(WALMode.NONE);
dataStorageConfiguration.setWalPath("/var/lib/ignite/wal");
dataStorageConfiguration.setWalArchivePath("/var/lib/ignite/wal/archive");

cacheDataRegionConfig:
DataRegionConfiguration dataRegionConfiguration = new DataRegionConfiguration();
dataRegionConfiguration.setPersistenceEnabled(true);
dataRegionConfiguration.setName("dataRegion");
dataRegionConfiguration.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
dataRegionConfiguration.setInitialSize(500L * 1024 * 1024);
dataRegionConfiguration.setMaxSize(8L * 1024 * 1024 * 1024);

Re: And again... Failed to get page IO instance (page content is corrupted)

2018-03-09 Thread Sergey Sergeev
Hi Mikhail,

Unfortunately, the problem has repeated itself on ignite-core-2.3.3:

27.02.18 00:27:55 ERROR GridCacheIoManager - Failed to process message [senderId=8f99c887-cd4b-4c38-a649-ca430040d535, messageType=class o.a.i.i.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateResponse]
org.apache.ignite.IgniteException: Runtime failure on bounds: [lower=null, upper=PendingRow []]
    at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:954) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:933) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:979) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.*GridCacheTtlManager.expire*(GridCacheTtlManager.java:197) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.GridCacheUtils.unwindEvicts(GridCacheUtils.java:833) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessageProcessed(GridCacheIoManager.java:1099) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1072) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:505) ~[ignite-core-2.3.3.jar:2.3.3]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Caused by: java.lang.IllegalStateException: Failed to get page IO instance (page content is corrupted)
    at org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forVersion(IOVersions.java:83) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forPage(IOVersions.java:95) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:148) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.tree.PendingRow.initKey(PendingRow.java:72) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.tree.PendingEntriesTree.getRow(PendingEntriesTree.java:118) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.tree.PendingEntriesTree.getRow(PendingEntriesTree.java:31) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.fillFromBuffer(BPlusTree.java:4539) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.init(BPlusTree.java:4441) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.access$5300(BPlusTree.java:4380) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findLowerUnbounded(BPlusTree.java:910) ~[ignite-core-2.3.3.jar:2.3.3]
    at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.find(BPlusTree.java:942) ~[ignite-core-2.3.3.jar:2.3.3]
    ... 17 more


And if we are reading from cache...


27.02.18 00:27:56 ERROR MessagePartProcessingHandler - error processing
incoming