Hello!

Since your key is composite, it's hard to estimate its length (BinaryObject
has a lot of overhead). It's recommended to increase
IGNITE_MAX_INDEX_PAYLOAD_SIZE
until you no longer see this problem. Please try e.g. 128.

You can also try specifying that via setSqlIndexMaxInlineSize().
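For example, here is a rough sketch of both options (the cache name, key/value types, and the class wrapper are placeholders for illustration, not taken from your configuration):

    import org.apache.ignite.binary.BinaryObject;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class InlineSizeSketch {
        public static void main(String[] args) {
            // Option 1: raise the global inline cap before the node starts.
            // Equivalent to passing -DIGNITE_MAX_INDEX_PAYLOAD_SIZE=128 on the JVM command line.
            System.setProperty("IGNITE_MAX_INDEX_PAYLOAD_SIZE", "128");

            // Option 2: set the inline size per cache on its CacheConfiguration.
            CacheConfiguration<Object, BinaryObject> cacheCfg = new CacheConfiguration<>("myCache");
            cacheCfg.setSqlIndexMaxInlineSize(128);
        }
    }

In both cases the inline size caps how many bytes of the indexed value are copied into the index pages, so a larger composite key needs a larger value to avoid lookups into the data pages during comparisons.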

Regards,
-- 
Ilya Kasnacheev


Thu, Feb 28, 2019 at 16:21, BinaryTree <[email protected]>:

> Hi Ilya -
> First of all, thank for your reply!
> Here is my cache configuration:
>
> private static CacheConfiguration<DpKey, BinaryObject> getCacheConfiguration(IgniteConfiguration cfg) {
>
>     CacheConfiguration<DpKey, BinaryObject> cacheCfg = new CacheConfiguration<>();
>     cacheCfg.setName(IgniteCacheKey.DATA_POINT_NEW.getCode());
>     cacheCfg.setCacheMode(CacheMode.PARTITIONED);
>     cacheCfg.setBackups(1);
>     cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>     cacheCfg.setDataRegionName(Constants.FIVE_GB_PERSISTENCE_REGION);
>     cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DataPointCacheStore.class));
>     cacheCfg.setWriteThrough(true);
>     cacheCfg.setWriteBehindEnabled(true);
>     cacheCfg.setWriteBehindFlushThreadCount(2);
>     cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
>     cacheCfg.setWriteBehindFlushSize(409600);
>     cacheCfg.setWriteBehindBatchSize(1024);
>     cacheCfg.setStoreKeepBinary(true);
>     cacheCfg.setQueryParallelism(16);
>
>     // 2 MB rebalance batch size
>     cacheCfg.setRebalanceBatchSize(2 * 1024 * 1024);
>     cacheCfg.setRebalanceThrottle(100);
>
>     cacheCfg.setSqlIndexMaxInlineSize(256);
>
>     List<QueryEntity> entities = getQueryEntities();
>     cacheCfg.setQueryEntities(entities);
>
>     CacheKeyConfiguration cacheKeyConfiguration = new CacheKeyConfiguration(DpKey.class);
>     cacheCfg.setKeyConfiguration(cacheKeyConfiguration);
>
>     RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
>     affinityFunction.setPartitions(128);
>     affinityFunction.setExcludeNeighbors(true);
>     cacheCfg.setAffinity(affinityFunction);
>
>     cfg.setCacheConfiguration(cacheCfg);
>     return cacheCfg;
> }
>
>
> private static List<QueryEntity> getQueryEntities() {
>     List<QueryEntity> entities = Lists.newArrayList();
>
>     // Configure the visible (queryable) fields
>     QueryEntity entity = new QueryEntity(DpKey.class.getName(), DpCache.class.getName());
>     entity.setTableName(IgniteTableKey.T_DATA_POINT_NEW.getCode());
>
>     LinkedHashMap<String, String> map = new LinkedHashMap<>();
>     map.put("id", "java.lang.String");
>     map.put("gmtCreate", "java.lang.Long");
>     map.put("gmtModified", "java.lang.Long");
>     map.put("devId", "java.lang.String");
>     map.put("dpId", "java.lang.Integer");
>     map.put("code", "java.lang.String");
>     map.put("name", "java.lang.String");
>     map.put("customName", "java.lang.String");
>     map.put("mode", "java.lang.String");
>     map.put("type", "java.lang.String");
>     map.put("value", "java.lang.String");
>     map.put("rawValue", byte[].class.getName());
>     map.put("time", "java.lang.Long");
>     map.put("status", "java.lang.Boolean");
>     map.put("uuid", "java.lang.String");
>
>     entity.setFields(map);
>
>     // Configure the index information
>     QueryIndex devIdIdx = new QueryIndex("devId");
>     devIdIdx.setName("idx_devId");
>     devIdIdx.setInlineSize(32);
>     List<QueryIndex> indexes = Lists.newArrayList(devIdIdx);
>     entity.setIndexes(indexes);
>
>     entities.add(entity);
>
>     return entities;
> }
>
> public class DpKey implements Serializable {
>     private String key;
>     @AffinityKeyMapped
>     private String devId;
>
>     public DpKey() {
>     }
>
>     public DpKey(String key, String devId) {
>         this.key = key;
>         this.devId = devId;
>     }
>
>     public String getKey() {
>         return this.key;
>     }
>
>     public void setKey(String key) {
>         this.key = key;
>     }
>
>     public String getDevId() {
>         return this.devId;
>     }
>
>     public void setDevId(String devId) {
>         this.devId = devId;
>     }
>
>     public boolean equals(Object o) {
>         if (this == o) {
>             return true;
>         } else if (o != null && this.getClass() == o.getClass()) {
>             DpKey key = (DpKey)o;
>             return this.key.equals(key.key);
>         } else {
>             return false;
>         }
>     }
>
>     public int hashCode() {
>         return this.key.hashCode();
>     }
> }
>
> I have described my issue, along with some tests I have done, in this post:
>
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Streamer-Hung-after-a-period-td21161.html
>
> ------------------ Original Message ------------------
> *From:* "ilya.kasnacheev"<[email protected]>;
> *Sent:* Thursday, February 28, 2019, 9:03 PM
> *To:* "user"<[email protected]>;
> *主题:* Re: Performance degradation in case of high volumes
>
> Hello Justin!
>
> Ignite 2.6 does have IGNITE_MAX_INDEX_PAYLOAD_SIZE system property.
>
> We are talking about the primary key here. What is your primary key type?
> What other indexes do you have? Can you provide the complete configuration
> for the affected tables (including POJOs, if applicable)?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, Feb 28, 2019 at 15:29, Justin Ji <[email protected]>:
>
>> Ilya -
>>
>> I use Ignite 2.6.0, which does not have the IGNITE_MAX_INDEX_PAYLOAD_SIZE
>> system property.
>> But our index field has a fixed length of 25 characters, so where can I
>> find the algorithm to calculate the 'index inline size'?
>>
>> Looking forward to your reply.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
