I have noticed strange behaviour when I connect to the Visor shell after
ingesting a large amount of data into an Ignite cluster.
Below is the scenario:
I have deployed a 5-node Ignite cluster on Kubernetes with persistence enabled
(version 2.9.0 on Java 11).
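For reference, the persistence-related part of my configuration is equivalent
to roughly the following (a simplified sketch in Java; the actual deployment
uses the attached XML file, and the WAL paths below are placeholders):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ServerNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Enable native persistence on the default data region.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        // WAL and WAL archive locations (placeholder paths).
        storageCfg.setWalPath("/persistence/wal");
        storageCfg.setWalArchivePath("/persistence/walarchive");

        cfg.setDataStorageConfiguration(storageCfg);

        Ignition.start(cfg);
    }
}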
I started ingesting data into 3 tables using JDBC batch inserts (around 20
million records into each of the 3 tables, with backups set to 1), roughly as
sketched below. After this large ingestion I connected to the Visor shell
(from a pod which I deployed just to use as the Visor shell), using the same
Ignite config file that is used for the Ignite servers. After the Visor shell
connects to the cluster, the cleanup of no-longer-needed WAL records (which
should run after a checkpoint) stops, and the WAL starts growing linearly
because data ingestion is continuous. This makes the WAL disk run out of
space and the pods crash.
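The ingestion itself is plain JDBC batching through the thin driver, along
these lines (a simplified sketch; the table, columns, host and batch size are
placeholders rather than my actual schema):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchIngest {
    public static void main(String[] args) throws Exception {
        // Thin-driver connection to the cluster service (placeholder host).
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://ignite-service:10800");
             PreparedStatement ps = conn.prepareStatement("INSERT INTO PERSON (ID, NAME) VALUES (?, ?)")) {
            for (int i = 1; i <= 20_000_000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "name-" + i);
                ps.addBatch();
                if (i % 1_000 == 0) {
                    ps.executeBatch();   // flush every 1000 rows
                }
            }
            ps.executeBatch();           // flush any remaining rows
        }
    }
}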
I have attached the config file which I used to deploy Ignite and also used to
connect to the Ignite Visor shell.
Please let me know if I am doing something wrong.
Command used to connect to the Visor shell:
./ignitevisorcmd.sh -cfg=/opt/ignite/conf/ignite-config.xml
After the Visor connects, there are frequent log entries with the following
message:

Could not clear historyMap due to WAL reservation on cp: CheckpointEntry
[id=c7ef72fb-b701-427b-a383-f8035c7985c1, timestamp=1606590778899,
ptr=FileWALPointer [idx=26090, fileOff=24982860, len=9572]], history map
size is 38 


Could you please let me know if I am doing anything wrong, or if there is any
known issue around this?


Regards,
Shiva
 



