[
https://issues.apache.org/jira/browse/GEODE-5518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16567802#comment-16567802
]
Nilkanth Patel commented on GEODE-5518:
---------------------------------------
As you said, putAll with 1 record works fine. One possible reason could be that
you are hitting duplicate records while building the map for putAll: in that
case, even though you are making a batch of 1000, any entry with a duplicate
map key will overwrite the existing entry, and effectively fewer records will
be ingested into the region. Please check whether you are hitting this case.
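
A minimal sketch of the duplicate-key scenario described above, using a plain
java.util.HashMap so it runs without a Geode dependency (the key scheme
"order-" + (i % keySpace) is hypothetical, just to force collisions). In real
code the resulting map would be passed to Region.putAll, which would then
ingest only the surviving distinct keys:

```java
import java.util.HashMap;
import java.util.Map;

public class PutAllBatchDemo {
    // Build a batch of `count` records whose keys repeat every `keySpace`
    // records (hypothetical key scheme for illustration).
    static Map<String, String> buildBatch(int count, int keySpace) {
        Map<String, String> batch = new HashMap<>();
        for (int i = 0; i < count; i++) {
            // Map.put silently replaces the previous value for the same key,
            // so later records overwrite earlier ones.
            batch.put("order-" + (i % keySpace), "value-" + i);
        }
        return batch;
    }

    public static void main(String[] args) {
        Map<String, String> batch = buildBatch(1000, 100);
        // Only 100 distinct keys survive, so region.putAll(batch) would
        // ingest 100 entries into the region, not 1000.
        System.out.println(batch.size()); // prints 100
    }
}
```

If the keys are truly unique, batch.size() equals the number of records built,
and putAll ingests the full batch.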
> some records in the region are not fetched when executing fetch query
> ---------------------------------------------------------------------
>
> Key: GEODE-5518
> URL: https://issues.apache.org/jira/browse/GEODE-5518
> Project: Geode
> Issue Type: Bug
> Components: core, querying
> Reporter: yossi reginiano
> Priority: Major
>
> hi all,
> we are using geode 1.4 and facing the following:
> we are starting to adopt the putAll function, which accepts a bulk of records
> and persists them into the region
> we have noticed that the process that fetches the records from the region
> (executing a fetch command in bulks of 1000) from time to time misses a
> record or two, which causes these records to be left in the region as
> "zombies" - because the current index is now greater than those records' indexes
> this started to happen only when we started to use the putAll function -
> prior to this we did not face any such issue
> also - when we use putAll with only 1 record at a time it works fine
> has anybody faced this?
> is there some constraint on the number of records that can be sent to the
> putAll function?
> thanks in advance
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)