yossi reginiano commented on GEODE-5518:

Anthony - the LIMIT should not be a factor here - we see this behavior even when 
we do not reach the LIMIT


Jason - the map itself is sorted by index number - we make sure of that
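A minimal plain-Java sketch of that point (this is not Geode API, and the class and key names are illustrative): a map keyed by the record index and backed by a TreeMap iterates in ascending index order no matter what order the entries were inserted in, which is presumably what keeps the submitted batch sorted.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Plain-Java sketch (not Geode API): a TreeMap keyed by the record index
// iterates in ascending key order regardless of insertion order, so a
// putAll-style loop over its entries sees the batch sorted by index.
public class SortedBatchSketch {
    public static void main(String[] args) {
        Map<Integer, String> batch = new TreeMap<>();
        batch.put(3, "rec-3");   // inserted out of order on purpose
        batch.put(1, "rec-1");
        batch.put(2, "rec-2");

        List<Integer> applyOrder = new ArrayList<>(batch.keySet());
        System.out.println("apply order: " + applyOrder);  // prints [1, 2, 3]
    }
}
```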

the special process is not in sync with the putAll - it runs in the background. 
We would expect that even if putAll has not finished, we would not get a "jump" 
in indexes like the one we are facing, but would still see the entries sorted in 
the order we submitted them to putAll
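To make the "jump" concrete, here is a minimal plain-Java sketch (not Geode API; the class, field, and method names are all illustrative) of a background fetcher that keeps a high-water-mark index, as described in the issue. If entry 3 becomes visible only after entry 4 - which can happen while putAll is still applying entries one by one - the mark has already moved past it and it is never fetched, i.e. it is left behind as a "zombie".

```java
import java.util.Arrays;
import java.util.ArrayList;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

// Plain-Java sketch (not Geode API): a fetcher that advances a
// high-water-mark index silently skips any entry that becomes visible
// out of index order, leaving it unfetched ("zombie").
public class ZombieSketch {
    static List<Integer> fetched = new ArrayList<>();
    static int currentIndex = 0;           // high-water mark kept by the fetcher

    // Fetch everything with index > currentIndex (stand-in for the fetch query).
    static void fetch(SortedSet<Integer> visible) {
        for (int idx : visible) {
            if (idx > currentIndex) {
                fetched.add(idx);
                currentIndex = idx;        // the mark advances past any gap
            }
        }
    }

    public static void main(String[] args) {
        SortedSet<Integer> region = new TreeSet<>();
        // putAll applies entries individually; suppose the fetcher runs
        // after indexes 1, 2, 4 are visible but before 3 is.
        region.addAll(Arrays.asList(1, 2, 4));
        fetch(region);                     // fetches 1, 2, 4; mark is now 4
        region.add(3);                     // entry 3 becomes visible late
        fetch(region);                     // 3 < mark, so it is never fetched
        System.out.println("fetched=" + fetched + " zombie=" + !fetched.contains(3));
        // prints: fetched=[1, 2, 4] zombie=true
    }
}
```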

but here you mention that even if putAll receives a sorted map, in some 
scenarios the order might still not be preserved

> some records in the region are not fetched when executing fetch query
> ---------------------------------------------------------------------
>                 Key: GEODE-5518
>                 URL: https://issues.apache.org/jira/browse/GEODE-5518
>             Project: Geode
>          Issue Type: Bug
>          Components: core, querying
>            Reporter: yossi reginiano
>            Priority: Major
> hi all,
> we are using geode 1.4 and facing the following:
> we are starting to adopt the putAll function, which accepts a bulk of records 
> and persists them into the region
> we have noticed that the process that fetches records from the region 
> (executing a fetch query in bulks of 1000) misses a record or two from time 
> to time, which causes these records to be left in the region as "zombies" - 
> because the current index is now greater than those records' indexes
> this started to happen only when we began using the putAll function - 
> prior to that we did not face any such issue
> also - when we use putAll with only 1 record at a time it works fine
> has anybody faced this?
> is there some constraint on the number of records that can be sent to the 
> putAll function?
> thanks in advance

This message was sent by Atlassian JIRA