Modification/update is also done from the client side, in the same refresh() method, where modification is followed by deletion.

getGemfireTemplate().putAll(result);

Here, result is a map whose entries are key -> PdxInstance.

1. Let's say I have the JSON {uuid: "a-b-c-d", intField: 21} [the region has an entry with key = "abcd" --> value = PdxInstance]

2. A new update arrives: {uuid: "a-b-c-d", intField: 22} [the region entry with key = "abcd" is now replaced by a new PdxInstance]

It may happen that the old entry is missing intField, the field on which the index is defined, while the new entry contains intField; in that case an index mapping is added, and vice versa for a deletion.
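
For reference, a minimal sketch of this update path, assuming a Spring Data GemFire GemfireTemplate wired to the /Event region; the class name, the refresh() body, and the sample JSON payloads below are illustrative only:

import java.util.HashMap;
import java.util.Map;

import org.apache.geode.pdx.JSONFormatter;
import org.apache.geode.pdx.PdxInstance;
import org.springframework.data.gemfire.GemfireTemplate;

public class EventRefresher {

    private final GemfireTemplate template; // assumed to be wired to the /Event region

    public EventRefresher(GemfireTemplate template) {
        this.template = template;
    }

    public void refresh() {
        Map<String, PdxInstance> result = new HashMap<>();

        // Old version: intField is present, so the index holds a mapping for this key.
        result.put("abcd", JSONFormatter.fromJSON("{\"uuid\": \"a-b-c-d\", \"intField\": 21}"));
        template.putAll(result);

        // New version replaces the whole PdxInstance for the same key. If intField
        // were absent here, the index mapping would have to be removed (and vice versa).
        result.put("abcd", JSONFormatter.fromJSON("{\"uuid\": \"a-b-c-d\", \"intField\": 22}"));
        template.putAll(result);
    }
}

The second putAll() replaces the whole PdxInstance for the key, which is what makes the indexed field appear or disappear between versions.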

Regards,
Dharam

From: Jason Huynh [mailto:[email protected]]
Sent: Wednesday, May 03, 2017 11:18 PM
To: [email protected]
Subject: Re: IndexMaintenanceException while removing old keys from Compact Map Range Index

How are you updating the 8 records from the database?  Is it being done directly on the server?  Is it an actual region.put(key, JSONFormatter.fromJSON(newString))?

Is there any chance that the object in the cache is being modified underneath 
the index?

I am not quite sure how the values were indexed properly yet now cannot be removed, unless something is changing the value from a PdxString into an Integer.
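
For what it's worth, here is a minimal sketch of one way that could happen with JSONFormatter, assuming (purely hypothetically) that intField is sometimes quoted and sometimes numeric in the source JSON; the payloads below are made up for illustration. A quoted value is stored as a string field (held as a PdxString by the index), while an unquoted value is stored as a numeric field, so the same index ends up holding keys of mixed types:

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.pdx.JSONFormatter;
import org.apache.geode.pdx.PdxInstance;

public class FieldTypeCheck {
    public static void main(String[] args) {
        // JSONFormatter needs a cache, since it registers PDX types in the cache's type registry.
        Cache cache = new CacheFactory().set("mcast-port", "0").set("log-level", "error").create();
        try {
            // Quoted value -> string field.
            PdxInstance quoted = JSONFormatter.fromJSON("{\"intField\": \"21\"}");
            // Unquoted value -> numeric field.
            PdxInstance unquoted = JSONFormatter.fromJSON("{\"intField\": 21}");
            System.out.println(quoted.getField("intField"));   // a String value
            System.out.println(unquoted.getField("intField")); // a numeric value
        } finally {
            cache.close();
        }
    }
}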


On Wed, May 3, 2017 at 10:36 AM Thacker, Dharam <[email protected]> wrote:
Hi Huynh,

Entries are being put using GemfireTemplate from the client side:
gemfireTemplate.putAll(map) [where map --> Map<String, PdxInstance> and PdxInstance --> JSONFormatter.fromJSON(jsonString)]

Yes, you are right. They are being modified before deletion. Let me describe those steps.


1. Let's say the cache has 10 records and the database has 8 records.

2. Calculating the delta is costly, and emptying the region and reloading is not a valid option for us either, because the same region data is used in real time.

3. So we first update all 8 records from the database into the cache, which replaces the "value" part of a region entry whenever the key matches.

4. Then the question is what to do with the 2 extra records that still exist in the cache but not in the database.

5. So we calculate the key difference (cacheKeys - databaseKeys) [A MINUS B] and delete the unwanted keys from the cache, as sketched below.
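
A minimal sketch of steps 3-5, assuming a GemfireTemplate wired to a client PROXY region for /Event; the class, method, and parameter names below are illustrative only, and databaseRecords stands for the 8 rows loaded from the database and already converted to PdxInstances (e.g. via JSONFormatter.fromJSON):

import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.geode.pdx.PdxInstance;
import org.springframework.data.gemfire.GemfireTemplate;

public class EventRegionRefresher {

    private final GemfireTemplate template; // assumed to be wired to the /Event region

    public EventRegionRefresher(GemfireTemplate template) {
        this.template = template;
    }

    // databaseRecords: key -> PdxInstance built from the current database rows (illustrative parameter)
    public void refresh(Map<String, PdxInstance> databaseRecords) {
        // Step 3: replace the "value" part of every matching region entry.
        template.putAll(databaseRecords);

        // Step 5: cacheKeys MINUS databaseKeys = keys that no longer exist in the database.
        // keySetOnServer() assumes a client PROXY region; a peer/local region would use keySet().
        Set<Object> staleKeys = new HashSet<Object>(template.getRegion().keySetOnServer());
        staleKeys.removeAll(databaseRecords.keySet());

        // Delete the unwanted keys from the cache.
        for (Object key : staleKeys) {
            template.remove(key);
        }
    }
}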

Regards,
Dharam

From: Jason Huynh [mailto:[email protected]]
Sent: Wednesday, May 03, 2017 10:53 PM
To: [email protected]<mailto:[email protected]>
Subject: Re: IndexMaintenanceException while removing old keys from Compact Map Range Index

Hi Dharam,

How are the entries being put into the region?  Are they put through a client 
or is it done on the server itself?

Does the put command look similar to:
region.put(key, JSONFormatter.fromJSON(someJsonString))?

Somehow the object that is being removed is now returning an Integer instead of a PdxString for its index value (either intField or uuid, depending on which index is failing).  I am not sure exactly how that would happen at this point, but it will help to know exactly how these objects are put and how they are updated and removed.  Are they being modified at any point?

What version are you currently using?  I'm going to guess 1.0?

Is there an actual domain object for these objects that some values are being 
deserialized to on the server?

-Jason Huynh



On Wed, May 3, 2017 at 3:36 AM Thacker, Dharam <[email protected]> wrote:
Hi Team,

Could you guide us with the exception below?

How did we get it?

Step1: Our region is already loaded with 1M+ records [Region<String, PdxInstance> -- each PdxInstance is built from a JSON string using JSONFormatter]
Step2: When the client is instructed about bulk updates in the database, we calculate the diff of keys using (cache - database) to remove stale, deleted entries from the region
Step3: To do the same, we run the loop below:


for (Object key : cacheKeys) {
    template.remove(key);
}
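
As a side note (this is not what we currently run, just an illustrative sketch): once the stale keys are known, the per-key loop above could also be done as one batched call via Region.removeAll; cacheKeys and databaseKeys below stand for the two key sets from Step2, and template is the GemfireTemplate wired to /Event:

import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

import org.apache.geode.cache.Region;
import org.springframework.data.gemfire.GemfireTemplate;

public class StaleKeyCleanup {

    // Compute (cache - database) and remove the stale keys in one batched call.
    public static void removeStaleKeys(GemfireTemplate template,
                                       Collection<String> cacheKeys,
                                       Collection<String> databaseKeys) {
        Set<String> staleKeys = new HashSet<String>(cacheKeys);
        staleKeys.removeAll(databaseKeys);   // (cache - database), as in Step2

        Region<String, Object> eventRegion = template.getRegion();
        eventRegion.removeAll(staleKeys);    // one operation instead of one remove() per key
    }
}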

Region def:

<geode:replicated-region id="Event"
                     cache-ref="geodeCache"
                     scope="distributed-ack"
                     key-constraint="java.lang.String"
                     value-constraint="org.apache.geode.pdx.PdxInstance"
                     shortcut="REPLICATE_PERSISTENT_OVERFLOW"
                     persistent="true"
                     disk-synchronous="false"
                     disk-store-ref="event_disk_store">
                     <geode:cache-loader ref="eventCacheMissDbLoader"/>
       </geode:replicated-region>

       <geode:index id="event_uuid_indx" expression="i.uuid" from="/Event i" 
cache-ref="geodeCache"/>
       <geode:index id="event_intField_indx" expression="i.intField" 
from="/Event i" cache-ref="geodeCache" type="FUNCTIONAL"/>

Which results in the exception below >>

java.lang.ClassCastException: org.apache.geode.pdx.internal.PdxString cannot be 
cast to java.lang.Integer
        at java.lang.Integer.compareTo(Integer.java:52)
        at 
org.apache.geode.cache.query.internal.types.ExtendedNumericComparator.compare(ExtendedNumericComparator.java:49)
        at 
java.util.concurrent.ConcurrentSkipListMap.cpr(ConcurrentSkipListMap.java:655)
        at 
java.util.concurrent.ConcurrentSkipListMap.findPredecessor(ConcurrentSkipListMap.java:682)
        at 
java.util.concurrent.ConcurrentSkipListMap.doGet(ConcurrentSkipListMap.java:781)
        at 
java.util.concurrent.ConcurrentSkipListMap.get(ConcurrentSkipListMap.java:1546)
        at 
org.apache.geode.cache.query.internal.index.MemoryIndexStore.basicRemoveMapping(MemoryIndexStore.java:308)
        at 
org.apache.geode.cache.query.internal.index.MemoryIndexStore.removeMapping(MemoryIndexStore.java:286)
        at 
org.apache.geode.cache.query.internal.index.CompactRangeIndex$IMQEvaluator.applyProjection(CompactRangeIndex.java:1695)
        at 
org.apache.geode.cache.query.internal.index.CompactRangeIndex$IMQEvaluator.doNestedIterations(CompactRangeIndex.java:1627)
        at 
org.apache.geode.cache.query.internal.index.CompactRangeIndex$IMQEvaluator.doNestedIterations(CompactRangeIndex.java:1637)
        at 
org.apache.geode.cache.query.internal.index.CompactRangeIndex$IMQEvaluator.evaluate(CompactRangeIndex.java:1477)
        ... 23 common frames omitted
Wrapped by: org.apache.geode.cache.query.internal.index.IMQException: null
        at 
org.apache.geode.cache.query.internal.index.CompactRangeIndex$IMQEvaluator.evaluate(CompactRangeIndex.java:1491)
        at 
org.apache.geode.cache.query.internal.index.CompactRangeIndex.removeMapping(CompactRangeIndex.java:167)
        at 
org.apache.geode.cache.query.internal.index.AbstractIndex.removeIndexMapping(AbstractIndex.java:511)
        at 
org.apache.geode.cache.query.internal.index.IndexManager.processAction(IndexManager.java:1111)
        at 
org.apache.geode.cache.query.internal.index.IndexManager.updateIndexes(IndexManager.java:967)
        at 
org.apache.geode.cache.query.internal.index.IndexManager.updateIndexes(IndexManager.java:941)
        at 
org.apache.geode.internal.cache.AbstractRegionEntry.destroy(AbstractRegionEntry.java:815)
        ... 17 common frames omitted
Wrapped by: org.apache.geode.cache.query.IndexMaintenanceException: 
org.apache.geode.cache.query.internal.index.IMQException
        at 
org.apache.geode.internal.cache.AbstractRegionEntry.destroy(AbstractRegionEntry.java:820)
        at 
org.apache.geode.internal.cache.AbstractRegionMap.destroyEntry(AbstractRegionMap.java:3038)
        at 
org.apache.geode.internal.cache.AbstractRegionMap.destroy(AbstractRegionMap.java:1386)
        at 
org.apache.geode.internal.cache.LocalRegion.mapDestroy(LocalRegion.java:7019)
        at 
org.apache.geode.internal.cache.LocalRegion.mapDestroy(LocalRegion.java:6991)
        at 
org.apache.geode.internal.cache.LocalRegionDataView.destroyExistingEntry(LocalRegionDataView.java:55)
        at 
org.apache.geode.internal.cache.LocalRegion.basicDestroy(LocalRegion.java:6956)
        at 
org.apache.geode.internal.cache.DistributedRegion.basicDestroy(DistributedRegion.java:1738)
        at 
org.apache.geode.internal.cache.LocalRegion.basicBridgeDestroy(LocalRegion.java:5801)
        at 
org.apache.geode.internal.cache.tier.sockets.command.Destroy65.cmdExecute(Destroy65.java:232)
        at 
org.apache.geode.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:147)
        at 
org.apache.geode.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:783)
        at 
org.apache.geode.internal.cache.tier.sockets.ServerConnection.doOneMessage(ServerConnection.java:913)
        at 
org.apache.geode.internal.cache.tier.sockets.ServerConnection.run(ServerConnection.java:1143)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at 
org.apache.geode.internal.cache.tier.sockets.AcceptorImpl$1$1.run(AcceptorImpl.java:546)
        ... 1 common frames omitted
Wrapped by: org.apache.geode.cache.client.ServerOperationException: remote 
server on XXXXXX: : While performing a remote destroy
        at 
org.apache.geode.cache.client.internal.AbstractOp.processAck(AbstractOp.java:263)
        at 
org.apache.geode.cache.client.internal.DestroyOp$DestroyOpImpl.processResponse(DestroyOp.java:201)
        at 
org.apache.geode.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:176)
        at 
org.apache.geode.cache.client.internal.AbstractOp.attempt(AbstractOp.java:388)
        at 
org.apache.geode.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:272)
        at 
org.apache.geode.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:328)
        at 
org.apache.geode.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:937)
        at 
org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:155)
        at 
org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:110)
        at 
org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:697)
        at 
org.apache.geode.cache.client.internal.DestroyOp.execute(DestroyOp.java:93)

Regards,
Dharam
