dschneider-pivotal commented on a change in pull request #6701:
URL: https://github.com/apache/geode/pull/6701#discussion_r674220656



##########
File path: geode-apis-compatible-with-redis/src/main/java/org/apache/geode/redis/internal/data/RedisHash.java
##########
@@ -237,15 +238,21 @@ public int hstrlen(byte[] field) {
     return new ArrayList<>(hash.keySet());
   }
 
-  public ImmutablePair<Integer, List<byte[]>> hscan(Pattern matchPattern,
-      int count,
-      int cursor) {
-
-    ArrayList<byte[]> resultList = new ArrayList<>(count + 2);
+  public ImmutablePair<Integer, List<byte[]>> hscan(Pattern matchPattern, int count, int cursor) {
+    // No need to allocate more space than it's possible to use given the size of the hash. We need
+    // to add 1 to hash.size() to ensure that if count > hash.size(), we return a cursor of 0
+    long maximumCapacity = 2L * Math.min(count, hlen() + 1);
+    if (maximumCapacity > Integer.MAX_VALUE) {
+      LogService.getLogger().error(
+          "The size of the data to be returned by hscan, {}, exceeds the maximum capacity of an array",
+          maximumCapacity);
+      throw new OutOfMemoryError("Requested array size exceeds VM limit");

Review comment:
       I think OutOfMemoryError should be reserved for when you actually run
out of heap. In this case you are preventing that from happening by not doing
a large allocation. Some code will no longer trust the state of the JVM if it
sees an OutOfMemoryError and will initiate a shutdown. You could use
IllegalArgumentException instead. Should whatever exception is thrown here be
caught in HScanExecutor and converted to RedisResponse.error?
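The suggestion might be sketched roughly like this (a minimal, self-contained sketch; `checkedCapacity` and `executeHscan` are hypothetical stand-ins for the RedisHash check and the HScanExecutor / RedisResponse.error handling, not the actual Geode code):

```java
public class HscanCapacitySketch {

  // Mirrors the capacity computation from the diff above.
  static long maximumCapacity(int count, int hlen) {
    return 2L * Math.min(count, hlen + 1);
  }

  // Throws IllegalArgumentException rather than OutOfMemoryError: the JVM heap
  // is fine here, we are just refusing an oversized client request.
  static int checkedCapacity(int count, int hlen) {
    long capacity = maximumCapacity(count, hlen);
    if (capacity > Integer.MAX_VALUE) {
      throw new IllegalArgumentException(
          "hscan result size " + capacity + " exceeds the maximum capacity of an array");
    }
    return (int) capacity;
  }

  // Executor-level handling: translate the exception into a client-facing
  // error string, standing in for RedisResponse.error(...).
  static String executeHscan(int count, int hlen) {
    try {
      return "OK capacity=" + checkedCapacity(count, hlen);
    } catch (IllegalArgumentException e) {
      return "ERR " + e.getMessage();
    }
  }
}
```

Because IllegalArgumentException is unchecked and carries no implication about heap state, shutdown watchdogs that react to OutOfMemoryError would be left alone.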

##########
File path: geode-apis-compatible-with-redis/src/main/java/org/apache/geode/redis/internal/data/RedisHash.java
##########
@@ -237,15 +238,21 @@ public int hstrlen(byte[] field) {
     return new ArrayList<>(hash.keySet());
   }
 
-  public ImmutablePair<Integer, List<byte[]>> hscan(Pattern matchPattern,
-      int count,
-      int cursor) {
-
-    ArrayList<byte[]> resultList = new ArrayList<>(count + 2);
+  public ImmutablePair<Integer, List<byte[]>> hscan(Pattern matchPattern, int count, int cursor) {
+    // No need to allocate more space than it's possible to use given the size of the hash. We need
+    // to add 1 to hash.size() to ensure that if count > hash.size(), we return a cursor of 0

Review comment:
       change hash.size() to hlen()

##########
File path: geode-apis-compatible-with-redis/src/main/java/org/apache/geode/redis/internal/data/RedisHash.java
##########
@@ -237,15 +238,21 @@ public int hstrlen(byte[] field) {
     return new ArrayList<>(hash.keySet());
   }
 
-  public ImmutablePair<Integer, List<byte[]>> hscan(Pattern matchPattern,
-      int count,
-      int cursor) {
-
-    ArrayList<byte[]> resultList = new ArrayList<>(count + 2);
+  public ImmutablePair<Integer, List<byte[]>> hscan(Pattern matchPattern, int count, int cursor) {
+    // No need to allocate more space than it's possible to use given the size of the hash. We need
+    // to add 1 to hash.size() to ensure that if count > hash.size(), we return a cursor of 0
+    long maximumCapacity = 2L * Math.min(count, hlen() + 1);
+    if (maximumCapacity > Integer.MAX_VALUE) {
+      LogService.getLogger().error(

Review comment:
       I'm not sure what we should do about logging in these cases. Normally we
do not log anything in a server when a client does something wrong; we just
have the server respond to the client with an error message. I know that in
some cases the standard Redis error responses limit how much extra information
we can give the user about what they did wrong. In those cases we could add
some info to the server log, but maybe it should be at the "info" log level.
The "error" level is usually reserved for when something unexpected happened on
the server but it thinks it can keep running. The "warning" level is for when
things don't look healthy (but might be okay) and we want to get the admin's
attention. The "fatal" level is for when something is wrong and the server
needs to shut down.
   In this case I would not log anything here if we can tell the client what
was wrong. If we can't get a response back to the client, then I think this
should be "info" and describe what the client did wrong.
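The level conventions described above could be summarized in code (a hypothetical sketch; the enum and method names are illustrative, not Geode's actual logging code):

```java
public class LogLevelSketch {
  // Hypothetical taxonomy of the situations described above.
  enum Situation {
    CLIENT_ERROR_WITH_RESPONSE,   // we can tell the client what was wrong
    CLIENT_ERROR_NO_RESPONSE,     // client misbehaved but we can't respond
    UNEXPECTED_BUT_RECOVERABLE,   // something unexpected; server keeps running
    UNHEALTHY,                    // might be okay, but get the admin's attention
    MUST_SHUT_DOWN                // something is wrong; server needs to stop
  }

  // Maps each situation to the log level suggested in the review comment;
  // "none" means respond to the client and log nothing on the server.
  static String levelFor(Situation s) {
    switch (s) {
      case CLIENT_ERROR_WITH_RESPONSE: return "none";
      case CLIENT_ERROR_NO_RESPONSE: return "info";
      case UNEXPECTED_BUT_RECOVERABLE: return "error";
      case UNHEALTHY: return "warning";
      default: return "fatal";
    }
  }
}
```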

##########
File path: geode-apis-compatible-with-redis/src/main/java/org/apache/geode/redis/internal/data/RedisHash.java
##########
@@ -237,15 +238,21 @@ public int hstrlen(byte[] field) {
     return new ArrayList<>(hash.keySet());
   }
 
-  public ImmutablePair<Integer, List<byte[]>> hscan(Pattern matchPattern,
-      int count,
-      int cursor) {
-
-    ArrayList<byte[]> resultList = new ArrayList<>(count + 2);
+  public ImmutablePair<Integer, List<byte[]>> hscan(Pattern matchPattern, int count, int cursor) {
+    // No need to allocate more space than it's possible to use given the size of the hash. We need
+    // to add 1 to hash.size() to ensure that if count > hash.size(), we return a cursor of 0
+    long maximumCapacity = 2L * Math.min(count, hlen() + 1);
+    if (maximumCapacity > Integer.MAX_VALUE) {

Review comment:
       Were you able to test this in the integration test? I know you did in
the unit test, but that is hard to do when you need the hash to have more than
1G entries. If you could use the same hlen() trick in the integration test
that you used in the unit test, then you could test how this exception will be
handled by the higher levels of the code.
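The "hlen() trick" might be sketched like this (illustrative classes only, not Geode's actual RedisHash or its tests): subclass the hash and override hlen() so the overflow path is reachable without allocating billions of entries.

```java
public class HugeHlenTestSketch {
  static class Hash {
    int hlen() { return 0; } // a real implementation would return hash.size()

    // Same capacity check as the diff, with the exception type swapped per the
    // earlier review comment.
    int hscanCapacity(int count) {
      long maximumCapacity = 2L * Math.min(count, hlen() + 1);
      if (maximumCapacity > Integer.MAX_VALUE) {
        throw new IllegalArgumentException(
            "requested capacity " + maximumCapacity + " exceeds the array limit");
      }
      return (int) maximumCapacity;
    }
  }

  // Pretend the hash holds ~2G entries without actually storing any of them.
  static class HugeHash extends Hash {
    @Override
    int hlen() { return Integer.MAX_VALUE - 1; }
  }

  static boolean overflowDetected(int count) {
    try {
      new HugeHash().hscanCapacity(count);
      return false;
    } catch (IllegalArgumentException e) {
      return true;
    }
  }
}
```

In an integration test the same override (or a spy) would let the assertion run against the full command path, so the higher-level exception handling gets exercised too.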




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

