[ https://issues.apache.org/jira/browse/GEODE-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17110665#comment-17110665 ]
ASF subversion and git services commented on GEODE-8127:
--------------------------------------------------------

Commit e084aa4bdaed4d626e559fec4e7e64e1fbf93755 in geode's branch refs/heads/feature/GEODE-8067 from Darrel Schneider
[ https://gitbox.apache.org/repos/asf?p=geode.git;h=e084aa4 ]

GEODE-8130: use a single region for redis Sets and Hashes (#5120)

* Replaced the replicated metaRegion with a partitioned dataRegion. The dataRegion is currently used the same way the metaRegion was, except that sets and hashes now store their actual data in it.
* Exception handling now correctly deals with FunctionException.
* Disabled a test until GEODE-8127 is fixed.
* The meta region and the locks map now use ByteArrayWrapper as the key instead of a String. Since a ByteArrayWrapper is already used as the key in the data region, this ends up saving memory.
* Found a problem with redis dynamic region management: some of the dynamic-region code executed whenever a new set or hash was added to the metaDataRegion, because it only ignored STRING and HLL. This caused extra memory to be used for every redis set/hash. The dynamic region code is now used only for lists and sortedSets.
* This commit has some TODO comments about what looks like a bug in the dynamic region code when a new server is started: it appears the new server will not create already-existing dynamic regions. We could test this by starting one server, creating a LIST, starting a second server, and then shutting down the first server. Does the LIST still exist? If we change lists and sortedSets not to use dynamic regions, this issue will go away.
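The ByteArrayWrapper change above can be illustrated with a minimal standalone sketch. This is not Geode's actual ByteArrayWrapper class; it is a hypothetical stand-in showing why a byte-array wrapper (with value-based equals/hashCode) works as a single key type across the data region, meta region, and locks map, instead of converting to a parallel String key:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for redis's ByteArrayWrapper: wraps a byte[] so it
// can serve as a map/region key. A raw byte[] cannot, because arrays use
// identity-based equals() and hashCode().
final class ByteArrayWrapper {
    private final byte[] bytes;

    ByteArrayWrapper(byte[] bytes) {
        this.bytes = bytes;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof ByteArrayWrapper
                && Arrays.equals(bytes, ((ByteArrayWrapper) o).bytes);
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(bytes);
    }
}

public class KeyDemo {
    public static void main(String[] args) {
        Map<ByteArrayWrapper, String> region = new HashMap<>();
        region.put(new ByteArrayWrapper(new byte[] {1, 2, 3}), "value");

        // A separately constructed wrapper over equal bytes finds the entry,
        // so the same key type can be shared by the data region, the meta
        // region, and the locks map without building String copies.
        System.out.println(region.get(new ByteArrayWrapper(new byte[] {1, 2, 3})));
    }
}
```

Because one wrapper instance can be reused for all three structures, the commit avoids allocating a String form of every key alongside the byte form.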
> redis function+delta may not always execute function on primary
> ---------------------------------------------------------------
>
>                 Key: GEODE-8127
>                 URL: https://issues.apache.org/jira/browse/GEODE-8127
>             Project: Geode
>          Issue Type: Bug
>          Components: redis
>            Reporter: Darrel Schneider
>            Assignee: Darrel Schneider
>            Priority: Major
>
> The redis use of regions depends on the code that modifies the region
> storing redis data always executing on the primary. The code attempts to
> accomplish this by marking the function as "optimizeForWrite=true" and by
> routing the function to the node with the bucket using "withFilter(key)".
> This works most of the time, but in some cases the function executes on a
> redundant copy. What appears to happen is that at the time the function is
> dispatched it has one idea of who the primary is and sends the function to
> that node; before it executes, however, the primary moves from this node to
> another that is doing redundancy recovery. Then, when our function finally
> does a "put" on the localDataSet, it ends up being a remote operation sent
> to the other node.
> To fix this, our redis function could take a lock that prevents the
> bucket's primary status from changing (see BucketRegion doLockForPrimary)
> and then check whether we are the primary. If not, throw an exception that
> causes the function sender to retry (see BucketMovedException); otherwise,
> execute the function and release the lock at the end (see BucketRegion
> doUnlockForPrimary).
> We could enable this with a new method added to Function (much like the
> existing isHA and optimizeForWrite). This new method could be
> executeOnPrimary and default to false. (Adding a default method to the
> Function interface will not cause backwards compatibility issues unless a
> current class that implements Function had already added a method named
> "executeOnPrimary".)

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
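The proposed fix above can be sketched as a small standalone example. The names here are illustrative, not Geode's real API: PrimaryMovedException stands in for BucketMovedException, and the read/write lock stands in for BucketRegion's doLockForPrimary/doUnlockForPrimary. The sketch shows both pieces of the proposal: holding a lock so primary status cannot change mid-execution, and the executeOnPrimary default method that keeps existing Function implementors source-compatible:

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical stand-in for BucketMovedException: signals the function
// sender to retry on the new primary.
class PrimaryMovedException extends RuntimeException {}

// Hypothetical sketch of the Function interface change: existing
// implementors compile unchanged and inherit the default of false.
interface RedisFunction {
    void run();

    default boolean executeOnPrimary() {
        return false;
    }
}

class Bucket {
    // Read lock held while a function executes; the write lock would be
    // taken when primary status changes (compare BucketRegion
    // doLockForPrimary / doUnlockForPrimary).
    final ReadWriteLock primaryLock = new ReentrantReadWriteLock();
    volatile boolean primaryHere = true;
}

public class PrimaryExecutionDemo {
    // Execute the function only if this member is still the primary for the
    // bucket, holding the lock so the primary cannot move mid-execution.
    static void executeOnPrimary(Bucket bucket, RedisFunction fn) {
        bucket.primaryLock.readLock().lock();
        try {
            if (!bucket.primaryHere) {
                // Primary moved between dispatch and execution:
                // throw so the sender retries on the new primary.
                throw new PrimaryMovedException();
            }
            fn.run();
        } finally {
            bucket.primaryLock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        Bucket bucket = new Bucket();
        executeOnPrimary(bucket, () -> System.out.println("ran on primary"));

        bucket.primaryHere = false; // simulate the primary moving away
        try {
            executeOnPrimary(bucket, () -> {});
        } catch (PrimaryMovedException e) {
            System.out.println("would retry on new primary");
        }
    }
}
```

The check-then-execute sequence is safe only because both happen under the same lock; without it, the primary could still move between the check and the put, which is exactly the race described above.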