DonalEvans commented on a change in pull request #6701:
URL: https://github.com/apache/geode/pull/6701#discussion_r675061450



##########
File path: geode-apis-compatible-with-redis/src/main/java/org/apache/geode/redis/internal/data/RedisHash.java
##########
@@ -237,15 +238,21 @@ public int hstrlen(byte[] field) {
     return new ArrayList<>(hash.keySet());
   }
 
-  public ImmutablePair<Integer, List<byte[]>> hscan(Pattern matchPattern,
-      int count,
-      int cursor) {
-
-    ArrayList<byte[]> resultList = new ArrayList<>(count + 2);
+  public ImmutablePair<Integer, List<byte[]>> hscan(Pattern matchPattern, int count, int cursor) {
+    // No need to allocate more space than it's possible to use given the size of the hash. We need
+    // to add 1 to hash.size() to ensure that if count > hash.size(), we return a cursor of 0
+    long maximumCapacity = 2L * Math.min(count, hlen() + 1);
+    if (maximumCapacity > Integer.MAX_VALUE) {
+      LogService.getLogger().error(
+          "The size of the data to be returned by hscan, {}, exceeds the maximum capacity of an array",
+          maximumCapacity);
+      throw new OutOfMemoryError("Requested array size exceeds VM limit");

Review comment:
       Sounds good to me. I checked Redis' code and they don't hit this issue, since they're able to allocate lists with a maximum length of Long.MAX_VALUE, which is greater than their maximum number of entries in a hash. So this is one case where we're not able to exactly match their behaviour. Thankfully, it's also a case that should never realistically come up, since it's highly unlikely that users will be putting over a billion entries into one Redis hash.
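
       For context on why the guard is needed at all, here is a minimal, standalone sketch (not the actual RedisHash code; the class and helper names are hypothetical) of the capacity calculation and overflow check discussed above:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration only; the arithmetic mirrors the diff above but the
// class and method names are not part of the RedisHash change itself.
public class HscanCapacitySketch {

  // Each matched entry contributes two elements (field and value), so the upper
  // bound on the result size is 2 * min(count, hashSize + 1). The +1 leaves room
  // to detect that the whole hash was scanned when count > hashSize, which lets
  // hscan return a cursor of 0.
  static int boundedCapacity(int count, int hashSize) {
    long maximumCapacity = 2L * Math.min((long) count, (long) hashSize + 1);
    if (maximumCapacity > Integer.MAX_VALUE) {
      // Java arrays (the backing store of ArrayList) cannot hold more than
      // Integer.MAX_VALUE elements, unlike Redis lists, which are bounded by
      // Long.MAX_VALUE, so fail fast with a descriptive error.
      throw new OutOfMemoryError("Requested array size exceeds VM limit");
    }
    return (int) maximumCapacity;
  }

  public static void main(String[] args) {
    System.out.println(boundedCapacity(10, 1_000));    // 20: count limits the bound
    System.out.println(boundedCapacity(1_000_000, 3)); // 8: hash size limits the bound
    List<byte[]> result = new ArrayList<>(boundedCapacity(10, 1_000));
    System.out.println(result.size()); // 0 entries so far; only the capacity was pre-sized
  }
}
```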




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
