Github user galen-pivotal commented on a diff in the pull request: https://github.com/apache/geode/pull/404#discussion_r104549703 --- Diff: geode-core/src/main/java/org/apache/geode/redis/internal/executor/hash/HashInterpreter.java --- @@ -0,0 +1,126 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license + * agreements. See the NOTICE file distributed with this work for additional information regarding + * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. You may obtain a + * copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software distributed under the License + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express + * or implied. See the License for the specific language governing permissions and limitations under + * the License. + */ +package org.apache.geode.redis.internal.executor.hash; + +import java.util.Map; + +import org.apache.geode.cache.Region; +import org.apache.geode.redis.GeodeRedisServer; +import org.apache.geode.redis.internal.ByteArrayWrapper; +import org.apache.geode.redis.internal.Coder; +import org.apache.geode.redis.internal.ExecutionHandlerContext; +import org.apache.geode.redis.internal.RedisDataType; + +/** + * Utility class for interpreting and processing Redis Hash data structure + * + * + */ +public class HashInterpreter { + + /** + * <pre> + * The region:key separator. 
+ * + * REGION_KEY_SEPERATOR = ":" + * + * See Hash section of <a href= "https://redis.io/topics/data-types">https://redis.io/topics/data-types#Hashes</a> + * </pre> + */ + public static final String REGION_KEY_SEPERATOR = ":"; + + /** + * The default hash region name REGION_HASH_REGION = Coder.stringToByteArrayWrapper("ReDiS_HASH") + */ + public static final ByteArrayWrapper REGION_HASH_REGION = + Coder.stringToByteArrayWrapper(GeodeRedisServer.HASH_REGION); + + /** + * Return the region representing the hash + * + * @param key the raw Redis command key, which may be in region:key format + * @param context the execution handler context + * @return the region where the command's data will be processed + */ + @SuppressWarnings("unchecked") + public static Region<ByteArrayWrapper, Map<ByteArrayWrapper, ByteArrayWrapper>> getRegion( --- End diff -- @ggreen yeah, I'd put everything in one region because I think it's easier to understand, and because it's much cheaper to create new hash objects in a region than it is to create new regions. Though if you want to see my hidden agenda, look ahead: if I were to redesign data storage in the Redis adapter, I think I would do away with the separate region per type and the metadata region that just stores types, and implement the whole thing as one region that stores collections of all the types we support. During the lookup we could catch a `ClassCastException` if the key is the wrong type, and then we'd propagate that up as the same error Redis throws when you try to modify a key that is of the wrong type. Storing collections as Java objects rather than via some translation to a Region means that as the objects get larger, the cost of transferring them between members in the system increases as well. Geode contains a `Delta` interface that I think we could use to avoid the overhead of transferring the whole object every time.
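The lookup-and-cast idea above can be sketched without any Geode dependency. This is only an illustration: the class and method names are hypothetical, a plain `HashMap` stands in for the region, and the error message mirrors the `WRONGTYPE` reply Redis gives when a command touches a key holding the wrong kind of value:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the "one region for all types" idea: values of any
// Redis type live side by side in one map, and a lookup that casts the stored
// value to the wrong type is caught and surfaced as a Redis-style WRONGTYPE
// error. A plain HashMap stands in for the Geode Region here.
class SingleRegionSketch {
  private final Map<String, Object> region = new HashMap<>();

  public void put(String key, Object value) {
    region.put(key, value);
  }

  // Look up a hash; translate a type mismatch into the error Redis reports
  // when a command operates on a key holding the wrong kind of value.
  @SuppressWarnings("unchecked")
  public Map<String, String> getHash(String key) {
    try {
      return (Map<String, String>) region.get(key);
    } catch (ClassCastException e) {
      throw new IllegalStateException(
          "WRONGTYPE Operation against a key holding the wrong kind of value");
    }
  }
}
```

Because of type erasure the cast only checks the raw type, but that is exactly the case that matters here: a list stored under the key fails the cast to `Map`, and the `ClassCastException` is propagated up as the Redis-style error.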
Then the only downside is that we can't scale a single hash/set/list across servers, which I think is fine -- do you really need to store a list in Redis that takes more than however many gigabytes of RAM are in a single Geode instance? Some folks on the user/dev lists seem to disagree with this view, though, so take it with a grain of salt. The other thing I'd like to look at in implementing this is how we store and process data -- the whole `ByteArrayWrapper`/`String` thing could be simplified, but I think that's fairly orthogonal to the region and command interpreter implementation. I don't know that much about the end users of Redis, though, so I'm curious: what are you using the Redis adapter for? What use case are you optimizing for?
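On the `Delta` point: Geode's `Delta` interface (`hasDelta`/`toDelta`/`fromDelta`) lets a member ship only what changed instead of the whole object. As a rough illustration of the idea -- not the Geode API -- here's a hypothetical stdlib-only hash that serializes just its dirty fields:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical, stdlib-only sketch in the spirit of Geode's Delta interface:
// instead of shipping the whole hash between members, only the fields changed
// since the last serialization are written out and applied on the other side.
class DeltaHash {
  private final Map<String, String> fields = new HashMap<>();
  private final Set<String> dirty = new HashSet<>();

  public void hset(String field, String value) {
    fields.put(field, value);
    dirty.add(field); // remember what changed since the last toDelta()
  }

  public String hget(String field) {
    return fields.get(field);
  }

  public boolean hasDelta() {
    return !dirty.isEmpty();
  }

  // Write only the changed fields, then clear the dirty set.
  public void toDelta(DataOutput out) throws IOException {
    out.writeInt(dirty.size());
    for (String field : dirty) {
      out.writeUTF(field);
      out.writeUTF(fields.get(field));
    }
    dirty.clear();
  }

  // Apply a delta produced by toDelta() on another copy of this hash.
  public void fromDelta(DataInput in) throws IOException {
    int n = in.readInt();
    for (int i = 0; i < n; i++) {
      fields.put(in.readUTF(), in.readUTF());
    }
  }
}
```

Applying deltas this way keeps the replication cost proportional to the size of the change rather than the size of the collection, which is what would make the one-big-object-per-key design viable as hashes grow.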