Thanks Evgenii. My cluster runs on AWS EC2, and below is the disk specification of each node:
[inline image: disk specification]

Regards,
Favas

From: Evgenii Zhuravlev <[email protected]>
Sent: Tuesday, September 24, 2019 12:44 PM
To: [email protected]
Subject: Re: Ignite bulk data load issue

Hi,

Looks like you may have really slow disks. What kind of disks do you have there? I see throttling in the logs because the write operations are really slow.

Evgenii

Mon, 23 Sept 2019 at 13:07, Muhammed Favas <[email protected]>:

Hi,

I need help figuring out issues I ran into during a bulk load of data into an Ignite cluster. My cluster consists of 5 nodes, each with an 8-core CPU, 32 GB RAM, and a 30 GB disk. Ignite native persistence is enabled for the table.

I am trying to load data into my Ignite SQL table from CSV files using the COPY command. Each file contains 50 million records, and I have numerous files of the same size. Loading was quite fast at first, but after some time it became very slow, and now it takes hours to load even a single file. Below are my observations:

* When I trigger the load for the first time after a pause, CPU usage is at a promising level and the data loads at a high rate.
* After loading 2-3 files, CPU usage starts dropping below 1% and stays there indefinitely.
* If I stop the loading process for a while and restart it, it performs well again for some time, and then the same situation recurs.

When I checked the log file, I saw that certain thread WAITs are happening; I believe these waits are why the CPU usage drops. I have attached the entire log file. Can someone help me figure out why this happens in Ignite? Or have I made a mistake in my configuration? Below is my configuration file content:

<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="failureDetectionTimeout" value="30000"/>

    <!-- Redefining maximum memory size for the cluster node usage.
    -->
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="checkpointReadLockTimeout" value="0"/>

            <!-- Set the page size to 4 KB. -->
            <property name="pageSize" value="#{4 * 1024}"/>

            <!-- Increase the WAL segment size to 1 GB (default: 64 MB). -->
            <property name="walSegmentSize" value="#{1L * 1024 * 1024 * 1024}"/>

            <!-- Set the number of WAL segments to 5 (default: 10). -->
            <property name="walSegments" value="5"/>

            <!-- Set the WAL history size to 5 (default: 20). -->
            <property name="walHistorySize" value="5"/>

            <property name="walCompactionEnabled" value="true"/>
            <property name="walCompactionLevel" value="6"/>

            <!-- Enable write throttling. -->
            <property name="writeThrottlingEnabled" value="true"/>

            <!-- Redefining the default region's settings. -->
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="persistenceEnabled" value="true"/>
                    <property name="name" value="Default_Region"/>

                    <!-- Setting the size of the default region to 24 GB. -->
                    <property name="maxSize" value="#{24L * 1024 * 1024 * 1024}"/>

                    <!-- Increasing the checkpoint buffer size to 2 GB. -->
                    <property name="checkpointPageBufferSize" value="#{2L * 1024 * 1024 * 1024}"/>
                </bean>
            </property>
        </bean>
    </property>

    <property name="cacheConfiguration">
        <list>
            <!-- Partitioned cache example configuration. -->
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <property name="name" value="default*"/>
                <property name="cacheMode" value="PARTITIONED"/>
                <property name="atomicityMode" value="TRANSACTIONAL"/>
                <property name="queryParallelism" value="8"/>
            </bean>
        </list>
    </property>
</bean>

Regards,
Favas
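For readers following the thread, the bulk load described above takes roughly this shape. This is a hedged sketch only: the table name, column list, and CSV path are placeholders, not taken from this thread. Note that Ignite's COPY command runs through the thin JDBC driver (e.g., via the bundled sqlline shell).

```sql
-- Hedged example: bulk-load one CSV file into an Ignite SQL table.
-- my_table, its columns, and the file path are hypothetical placeholders.
COPY FROM '/data/part-000.csv'
INTO my_table (id, name, val)
FORMAT CSV;
```

Each such statement streams the file's contents to the cluster in batches, which is why a slow persistence disk shows up as stalled load throughput rather than a client-side error.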
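Evgenii's slow-disk hypothesis can be checked directly. Below is a rough sequential-write benchmark (an editor-added sketch; the test-file path is a placeholder and should point at the same filesystem that holds the Ignite work directory). `conv=fdatasync` forces the data to disk before `dd` reports throughput, which is what matters for WAL and checkpoint writes.

```shell
# Hedged sketch: rough sequential-write throughput of the disk backing
# Ignite persistence. TESTFILE is a placeholder path; put it on the same
# filesystem as the Ignite work directory.
TESTFILE=/tmp/ignite-disk-test.bin

# Write 128 MB and flush it to disk before reporting throughput.
dd if=/dev/zero of="$TESTFILE" bs=1M count=128 conv=fdatasync

# Clean up the test file.
rm -f "$TESTFILE"
```

If the reported rate is in the tens of MB/s or lower, write throttling during checkpoints (visible in the attached logs) is the expected symptom.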
