Hello!

I have not heard of an issue like this one before. It would help if you could
share a reproducer (one that creates a lot of load and detects cases like these).
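
Something along these lines would do. This is only a sketch of the detection harness: a `ConcurrentHashMap` stands in for the Ignite table, and all class and method names here are made up. In a real reproducer, `pickNext()` would run your actual "select ... limit 1" plus mark-in-progress steps against the cluster, and the harness would flag any record handed out twice:

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of a load-test harness that detects a record being handed out twice.
// The ConcurrentHashMap stands in for the Ignite table; in a real reproducer
// pickNext() would run the same select-and-mark steps against the cluster.
public class MarkRaceReproducer {

    // 0 = ready, 1 = in progress (stand-in for the action_state column)
    static final ConcurrentHashMap<Integer, Integer> table = new ConcurrentHashMap<>();

    // Atomically pick one ready record and mark it in progress.
    static Integer pickNext() {
        for (Map.Entry<Integer, Integer> e : table.entrySet()) {
            if (e.getValue() == 0 && table.replace(e.getKey(), 0, 1)) {
                return e.getKey();
            }
        }
        return null;
    }

    // Returns true if no record was ever returned twice.
    static boolean runLoadTest(int workers, int records) throws Exception {
        table.clear();
        for (int i = 0; i < records; i++) table.put(i, 0);
        Set<Integer> seen = ConcurrentHashMap.newKeySet();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Boolean>> results = new ArrayList<>();
        for (int w = 0; w < workers; w++) {
            results.add(pool.submit(() -> {
                Integer id;
                while ((id = pickNext()) != null) {
                    if (!seen.add(id)) return false;  // duplicate pick -> the reported bug
                }
                return true;
            }));
        }
        boolean ok = true;
        for (Future<Boolean> f : results) ok &= f.get();
        pool.shutdown();
        return ok && seen.size() == records;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runLoadTest(8, 1_000) ? "no duplicates" : "DUPLICATE PICK DETECTED");
    }
}
```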

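One thing worth checking in the meantime: is the pick-and-mark path wrapped in a single transaction? As far as I recall, in 2.7 plain SQL SELECTs do not participate in key-value transactions, which could explain a stale read under load. A pessimistic key-value transaction serializes the two steps. A rough sketch only, with the cache name, key, and state value (128, from your logs) as assumed placeholders:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class PickAndMark {
    // Sketch: serialize the get and the mark with a pessimistic transaction.
    // `ignite`, `cache`, and `contactId` are assumed names, not your real API.
    static void markInProgress(Ignite ignite, IgniteCache<Long, Integer> cache, long contactId) {
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
            Integer state = cache.get(contactId);  // PESSIMISTIC get locks the key
            if (state != null && state != 128) {   // 128 = in-progress, per the quoted logs
                cache.put(contactId, 128);         // mark while still holding the lock
            }
            tx.commit();                           // lock is released on commit
        }
    }
}
```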
Regards,
-- 
Ilya Kasnacheev


Tue, 21 Apr 2020 at 14:39, neerajarora100 <[email protected]>:

>
> I have a table into which inserts happen at the beginning of a performance
> run, when the job starts. While the inserts are going on, parallel
> operations (GET/UPDATE queries) also run against that table. The GET
> operation additionally updates a column value, marking the record as
> picked. However, the next GET performed on the table returns the same
> record again, even though the record was already marked as in progress.
>
> P.S. Both operations are performed by the same single thread in the
> system. Logs are below for reference: the record is marked in progress in
> the first log line at **20:36:42,864**, yet it is returned again in the
> result set of the query executed after **20:36:42,891** by the same thread.
> We also observed that during high load (usually in the same scenario as
> above) some update operations intermittently did not take effect on the
> table, even though the update executed successfully (validated using the
> returned result and then doing a GET immediately afterwards to check the
> updated value) and no exception was thrown.
>
>
> 13 Apr 2020 20:36:42,864 [SHT-4083-initial] FINEST  - AbstractCacheHelper.markContactInProgress:2321 -  Action state after mark in progresss contactId.ATTR=: 514409 for jobId : 4083 is actionState : 128
>
> 13 Apr 2020 20:36:42,891 [SHT-4083-initial] FINEST  - CacheAdvListMgmtHelper.getNextContactToProcess:347 - Query :
> select priority, contact_id, action_state, pim_contact_store_id, action_id,
>   retry_session_id, attempt_type, zone_id, action_pos
> from pim_4083
> where handler_id = ? and attempt_type != ? and next_attempt_after <= ?
>   and action_state = ? and exclude_flag = ?
> order by attempt_type desc, priority desc, next_attempt_after asc, contact_id asc
> limit 1
>
>
> This usually happens during performance runs when parallel jobs are
> started that work on Ignite. Can anyone suggest what can be done to avoid
> this situation?
>
> We have 2 Ignite data nodes deployed as a Spring Boot service in the
> cluster, accessed by 3 client nodes, with 6 GB of RAM and persistence
> enabled. Ignite version: 2.7.6. The cache configuration is as follows:
>
>     IgniteConfiguration cfg = new IgniteConfiguration();
>     CacheConfiguration cachecfg = new CacheConfiguration(CACHE_NAME);
>     cachecfg.setRebalanceThrottle(100);
>     cachecfg.setBackups(1);
>     cachecfg.setCacheMode(CacheMode.REPLICATED);
>     cachecfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>     cachecfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>     cachecfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>     // Defining and creating a new cache to be used by Ignite Spring Data repository.
>     CacheConfiguration ccfg = new CacheConfiguration(CACHE_TEMPLATE);
>     ccfg.setStatisticsEnabled(true);
>     ccfg.setCacheMode(CacheMode.REPLICATED);
>     ccfg.setBackups(1);
>     DataStorageConfiguration dsCfg = new DataStorageConfiguration();
>     dsCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
>     dsCfg.setStoragePath(storagePath);
>     dsCfg.setWalMode(WALMode.FSYNC);
>     dsCfg.setWalPath(walStoragePath);
>     dsCfg.setWalArchivePath(archiveWalStoragePath);
>     dsCfg.setWriteThrottlingEnabled(true);
>     cfg.setAuthenticationEnabled(true);
>     dsCfg.getDefaultDataRegionConfiguration()
>          .setInitialSize(Long.parseLong(cacheInitialMemSize) * 1024 * 1024);
>     dsCfg.getDefaultDataRegionConfiguration()
>          .setMaxSize(Long.parseLong(cacheMaxMemSize) * 1024 * 1024);
>     cfg.setDataStorageConfiguration(dsCfg);
>     cfg.setClientConnectorConfiguration(clientCfg);
>     // Run the command to alter the default user credentials:
>     // ALTER USER "ignite" WITH PASSWORD 'new_passwd'
>     cfg.setCacheConfiguration(cachecfg);
>     cfg.setFailureDetectionTimeout(Long.parseLong(cacheFailureTimeout));
>     ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>     ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>     ccfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>     ccfg.setRebalanceThrottle(100);
>     int pool = cfg.getSystemThreadPoolSize();
>     cfg.setRebalanceThreadPoolSize(2);
>     cfg.setLifecycleBeans(new MyLifecycleBean());
>     logger.info(methodName, "Starting ignite service");
>     ignite = Ignition.start(cfg);
>     ignite.cluster().active(true);
>     // Get all server nodes that are already up and running.
>     Collection<ClusterNode> nodes = ignite.cluster().forServers().nodes();
>     // Set the baseline topology that is represented by these nodes.
>     ignite.cluster().setBaselineTopology(nodes);
>     ignite.addCacheConfiguration(ccfg);
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
