paul-rogers commented on a change in pull request #11828:
URL: https://github.com/apache/druid/pull/11828#discussion_r754658759



##########
File path: 
processing/src/main/java/org/apache/druid/query/context/ResponseContextDeserializer.java
##########
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.query.context;
+
+import com.fasterxml.jackson.core.JsonParser;
+import com.fasterxml.jackson.core.JsonToken;
+import com.fasterxml.jackson.databind.DeserializationContext;
+import com.fasterxml.jackson.databind.deser.std.StdDeserializer;
+
+import java.io.IOException;
+
+@SuppressWarnings("serial")
+public class ResponseContextDeserializer extends StdDeserializer<ResponseContext>

Review comment:
       Turns out that this class is not needed in this PR: it is only used in 
the next response trailer PR. So, I removed it from this one.

##########
File path: 
processing/src/main/java/org/apache/druid/query/context/ResponseContext.java
##########
@@ -30,93 +33,334 @@
 import org.apache.druid.java.util.common.IAE;
 import org.apache.druid.java.util.common.ISE;
 import org.apache.druid.java.util.common.NonnullPair;
-import org.apache.druid.java.util.common.jackson.JacksonUtils;
 import org.apache.druid.query.SegmentDescriptor;
+import org.apache.druid.utils.CollectionUtils;
 import org.joda.time.Interval;
 
 import javax.annotation.Nullable;
+
 import java.io.IOException;
 import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Collections;
 import java.util.Comparator;
 import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.ConcurrentSkipListMap;
-import java.util.function.BiFunction;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Function;
+import java.util.stream.Collectors;
 
 /**
  * The context for storing and passing data between chains of {@link 
org.apache.druid.query.QueryRunner}s.
  * The context is also transferred between Druid nodes with all the data it 
contains.
- */
+ * <p>
+ * The response context consists of a set of key/value pairs. Keys are those 
defined in
+ * the {@code Keys} registry. Keys are indexed by key instance, not by name. 
The
+ * key defines the type of the associated value, including logic to merge 
values and
+ * to deserialize JSON values for that key.
+ *
+ * <h4>Structure</h4>
+ * The context has evolved to perform multiple tasks. First, it holds two kinds
+ * of information:
+ * <ul>
+ * <li>Information to be returned in the query response header.
+ * (These are values tagged as {@code HEADER}.)</li>
+ * <li>Values passed within a single server. (These are tagged with
+ * visibility {@code NONE}.)</li>
+ * </ul>
+ * Second, it performs multiple tasks:
+ * <ul>
+ * <li>Registers the keys to be used in the header. But, since it also holds
+ * internal information, the internal information also needs keys, though the
+ * corresponding values are never serialized.</li>
+ * <li>Gathers information for the query as a whole.</li>
+ * <li>Merges information back up the query tree: from multiple segments,
+ * from multiple servers, etc.</li>
+ * <li>Manages header size by dropping fields when the header would get too
+ * large.</li>
+ * </ul>
+ *
+ * As a result, the information in the context, when inspected by a calling
+ * query, may be incomplete if some of it was previously dropped by the
+ * called query.
+ *
+ * <h4>API</h4>
+ *
+ * The query profile needs to obtain the full, untruncated information. To do 
this
+ * it piggy-backs on the set operations to obtain the full value. To ensure 
this
+ * is possible, code that works with standard values should call the set (or 
add)
+ * functions provided which will do the needed map update.
+  */
 @PublicApi
 public abstract class ResponseContext
 {
   /**
    * The base interface of a response context key.
    * Should be implemented by every context key.
    */
-  public interface BaseKey
+  public interface Key

Review comment:
       Added `@ExtensionPoint`. How do I record this info so it is available 
when we prepare release notes?

##########
File path: 
processing/src/main/java/org/apache/druid/query/context/ResponseContext.java
##########
@@ -30,93 +33,334 @@
 import org.apache.druid.java.util.common.IAE;
 import org.apache.druid.java.util.common.ISE;
 import org.apache.druid.java.util.common.NonnullPair;
-import org.apache.druid.java.util.common.jackson.JacksonUtils;
 import org.apache.druid.query.SegmentDescriptor;
+import org.apache.druid.utils.CollectionUtils;
 import org.joda.time.Interval;
 
 import javax.annotation.Nullable;
+
 import java.io.IOException;
 import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Collections;
 import java.util.Comparator;
 import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.ConcurrentSkipListMap;
-import java.util.function.BiFunction;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Function;
+import java.util.stream.Collectors;
 
 /**
  * The context for storing and passing data between chains of {@link 
org.apache.druid.query.QueryRunner}s.
  * The context is also transferred between Druid nodes with all the data it 
contains.
- */
+ * <p>
+ * The response context consists of a set of key/value pairs. Keys are those 
defined in
+ * the {@code Keys} registry. Keys are indexed by key instance, not by name. 
The
+ * key defines the type of the associated value, including logic to merge 
values and
+ * to deserialize JSON values for that key.
+ *
+ * <h4>Structure</h4>
+ * The context has evolved to perform multiple tasks. First, it holds two kinds
+ * of information:
+ * <ul>
+ * <li>Information to be returned in the query response header.
+ * (These are values tagged as {@code HEADER}.)</li>
+ * <li>Values passed within a single server. (These are tagged with
+ * visibility {@code NONE}.)</li>
+ * </ul>
+ * Second, it performs multiple tasks:
+ * <ul>
+ * <li>Registers the keys to be used in the header. But, since it also holds
+ * internal information, the internal information also needs keys, though the
+ * corresponding values are never serialized.</li>
+ * <li>Gathers information for the query as a whole.</li>
+ * <li>Merges information back up the query tree: from multiple segments,
+ * from multiple servers, etc.</li>
+ * <li>Manages header size by dropping fields when the header would get too
+ * large.</li>
+ * </ul>
+ *
+ * As a result, the information in the context, when inspected by a calling
+ * query, may be incomplete if some of it was previously dropped by the
+ * called query.
+ *
+ * <h4>API</h4>
+ *
+ * The query profile needs to obtain the full, untruncated information. To do 
this
+ * it piggy-backs on the set operations to obtain the full value. To ensure 
this
+ * is possible, code that works with standard values should call the set (or 
add)
+ * functions provided which will do the needed map update.
+  */
 @PublicApi
 public abstract class ResponseContext
 {
   /**
    * The base interface of a response context key.
    * Should be implemented by every context key.
    */
-  public interface BaseKey
+  public interface Key
   {
+    /**
+     * Hack, pure and simple. The symbol "ResponseContext.Key.ETAG" must exist
+     * to avoid changing a line of code where we have no tests, which causes
+     * the build to fail. Remove this once the proper tests are added.
+     *
+     * @see org.apache.druid.server.QueryResource#doPost
+     */
+    public static final Key ETAG = Keys.ETAG;
+
     @JsonValue
     String getName();
+
+    /**
+     * The phase (header, trailer, none) where this key is emitted.
+     */
+    Visibility getPhase();
+
     /**
-     * Merge function associated with a key: Object (Object oldValue, Object 
newValue)
+     * Reads a value of this key from a JSON stream. Used by {@link 
ResponseContextDeserializer}.
+     */
+    Object readValue(JsonParser jp);
+
+    /**
+     * Merges two values of type T.
+     *
+     * This method may modify "oldValue" but must not modify "newValue".
      */
-    BiFunction<Object, Object, Object> getMergeFunction();
+    Object mergeValues(Object oldValue, Object newValue);
+
+    /**
+     * Returns true if this key can be removed to reduce header size when the
+     * header would otherwise be too large.
+     */
+    @JsonIgnore
+    boolean canDrop();
+  }
+
+  /**
+   * Where the key is emitted, if at all. Values in the context can be for 
internal
+   * use only: for return before the query results (header) or only after query
+   * results (trailer).
+   */
+  public enum Visibility
+  {
+    /**
+     * Keys that are present in both the "X-Druid-Response-Context" header 
*and* the response context trailer.
+     */
+    HEADER,
+
+    /**
+     * Keys that are not present in query responses at all. Generally used for 
internal state tracking within a
+     * single server.
+     */
+    NONE
+  }
+
+  /**
+   * Abstract key class which provides most functionality except the
+   * type-specific merge logic. Parsing is provided by an associated
+   * parse function.
+   */
+  public abstract static class AbstractKey implements Key
+  {
+    private final String name;
+    private final Visibility visibility;
+    private final boolean canDrop;
+    private final Function<JsonParser, Object> parseFunction;
+
+    AbstractKey(String name, Visibility visibility, boolean canDrop, Class<?> 
serializedClass)
+    {
+      this.name = name;
+      this.visibility = visibility;
+      this.canDrop = canDrop;
+      this.parseFunction = jp -> {
+        try {
+          return jp.readValueAs(serializedClass);
+        }
+        catch (IOException e) {
+          throw new RuntimeException(e);
+        }
+      };
+    }
+
+    AbstractKey(String name, Visibility visibility, boolean canDrop, 
TypeReference<?> serializedTypeReference)
+    {
+      this.name = name;
+      this.visibility = visibility;
+      this.canDrop = canDrop;
+      this.parseFunction = jp -> {
+        try {
+          return jp.readValueAs(serializedTypeReference);
+        }
+        catch (IOException e) {
+          throw new RuntimeException(e);
+        }
+      };
+    }
+
+    @Override
+    public String getName()
+    {
+      return name;
+    }
+
+    @Override
+    public Visibility getPhase()
+    {
+      return visibility;
+    }
+
+    @Override
+    public boolean canDrop()
+    {
+      return canDrop;
+    }
+
+    @Override
+    public Object readValue(JsonParser jp)
+    {
+      return parseFunction.apply(jp);
+    }
+
+    @Override
+    public String toString()
+    {
+      return name;
+    }
+  }
+
+  /**
+   * String valued attribute that holds the latest value assigned.
+   */
+  public static class StringKey extends AbstractKey
+  {
+    StringKey(String name, Visibility visibility, boolean canDrop)
+    {
+      super(name, visibility, canDrop, String.class);
+    }
+
+    @Override
+    public Object mergeValues(Object oldValue, Object newValue)
+    {
+      return newValue;
+    }
+  }
+
+  /**
+   * Boolean valued attribute with the semantics that once the flag is
+   * set true, it stays true.
+   */
+  public static class BooleanKey extends AbstractKey
+  {
+    BooleanKey(String name, Visibility visibility)
+    {
+      super(name, visibility, false, Boolean.class);
+    }
+
+    @Override
+    public Object mergeValues(Object oldValue, Object newValue)
+    {
+      return (boolean) oldValue || (boolean) newValue;
+    }
+  }
+
+  /**
+   * Long valued attribute that holds the latest value assigned.
+   */
+  public static class LongKey extends AbstractKey
+  {
+    LongKey(String name, Visibility visibility)
+    {
+      super(name, visibility, false, Long.class);
+    }
+
+    @Override
+    public Object mergeValues(Object oldValue, Object newValue)
+    {
+      return newValue;
+    }
   }
 
   /**
-   * Keys associated with objects in the context.
+   * Long valued attribute that holds the accumulation of values assigned.
+   */
+  public static class CounterKey extends AbstractKey
+  {
+    CounterKey(String name, Visibility visibility)
+    {
+      super(name, visibility, false, Long.class);
+    }
+
+    @Override
+    public Object mergeValues(Object oldValue, Object newValue)
+    {
+      if (oldValue == null) {
+        return newValue;
+      }
+      if (newValue == null) {
+        return oldValue;
+      }
+      return (Long) oldValue + (Long) newValue;
+    }
+  }
+
+  /**
+   * Global registry of response context keys.
+   * <p>
+   * Also defines the standard keys associated with objects in the context.
    * <p>
    * If it's necessary to have some new keys in the context then they might be 
listed in a separate enum:
    * <pre>{@code
-   * public enum ExtensionResponseContextKey implements BaseKey
+   * public class SomeClass
    * {
-   *   EXTENSION_KEY_1("extension_key_1"), EXTENSION_KEY_2("extension_key_2");
+   *   static final Key EXTENSION_KEY_1 = new StringKey(
+   *      "extension_key_1", Visibility.HEADER_AND_TRAILER, true);
+   *   static final Key EXTENSION_KEY_2 = new CounterKey(
+   *      "extension_key_2", Visibility.NONE);
    *
    *   static {
-   *     for (BaseKey key : values()) ResponseContext.Key.registerKey(key);
+   *     Keys.instance().registerKeys(new Key[] {
+   *        EXTENSION_KEY_1,
+   *        EXTENSION_KEY_2
+   *     });
    *   }
-   *
-   *   private final String name;
-   *   private final BiFunction<Object, Object, Object> mergeFunction;
-   *
-   *   ExtensionResponseContextKey(String name)
-   *   {
-   *     this.name = name;
-   *     this.mergeFunction = (oldValue, newValue) -> newValue;
-   *   }
-   *
-   *   @Override public String getName() { return name; }
-   *
-   *   @Override public BiFunction<Object, Object, Object> getMergeFunction() 
{ return mergeFunction; }
-   * }
    * }</pre>
-   * Make sure all extension enum values added with {@link Key#registerKey} 
method.
+   * Make sure all extension keys are added with the {@link #registerKey(Key)} 
or
+   * {@link #registerKeys(Key[])} methods.
+   * <p>
+   * Predefined key types exist for common values. Custom values can be 
created as
+   * shown below.

Review comment:
       This is meant to explain to extension writers how to create their keys. 
Reworded. Please check if it makes more sense now.
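       For extension authors reading along, the registration pattern described 
above can be modeled in a few lines. The sketch below is a self-contained 
stand-in, not Druid's actual classes: `Key`, `StringKey`, `CounterKey`, and the 
registry here are simplified hypothetical versions that only mirror the shape 
and merge semantics of the PR's API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A self-contained model of the key-registration pattern; all names here are
// simplified stand-ins for the PR's ResponseContext API, not the real classes.
public class KeyRegistrySketch
{
  interface Key
  {
    String getName();
    Object mergeValues(Object oldValue, Object newValue);
  }

  // Latest-value-wins semantics, like the PR's StringKey.
  static class StringKey implements Key
  {
    private final String name;
    StringKey(String name) { this.name = name; }
    @Override public String getName() { return name; }
    @Override public Object mergeValues(Object oldValue, Object newValue) { return newValue; }
  }

  // Accumulating semantics, like the PR's CounterKey.
  static class CounterKey implements Key
  {
    private final String name;
    CounterKey(String name) { this.name = name; }
    @Override public String getName() { return name; }
    @Override public Object mergeValues(Object oldValue, Object newValue)
    {
      if (oldValue == null) return newValue;
      if (newValue == null) return oldValue;
      return (Long) oldValue + (Long) newValue;
    }
  }

  // Name-indexed registry, analogous to Keys.instance().registerKeys(...).
  static final Map<String, Key> REGISTRY = new ConcurrentHashMap<>();

  static void registerKeys(Key[] keys)
  {
    for (Key key : keys) {
      if (REGISTRY.putIfAbsent(key.getName(), key) != null) {
        throw new IllegalStateException("Key already registered: " + key.getName());
      }
    }
  }

  public static void main(String[] args)
  {
    Key rows = new CounterKey("extension_rows");
    Key tag = new StringKey("extension_tag");
    registerKeys(new Key[] {rows, tag});

    // Counter keys accumulate; string keys keep the latest value.
    System.out.println(rows.mergeValues(2L, 3L));
    System.out.println(tag.mergeValues("a", "b"));
  }
}
```

       The point of the sketch is that an extension defines its keys once, 
statically, and the merge behavior travels with the key rather than being 
supplied as a separate BiFunction as in the old API.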

##########
File path: 
processing/src/main/java/org/apache/druid/query/context/ResponseContext.java
##########
@@ -351,87 +754,100 @@ public void merge(ResponseContext responseContext)
   /**
    * Serializes the context given that the resulting string length is less 
than the provided limit.
    * This method removes some elements from context collections if it's needed 
to satisfy the limit.
-   * There is no explicit priorities of keys which values are being truncated 
because for now there are only
-   * two potential limit breaking keys ({@link Key#UNCOVERED_INTERVALS}
-   * and {@link Key#MISSING_SEGMENTS}) and their values are arrays.
-   * Thus current implementation considers these arrays as equal prioritized 
and starts removing elements from
+   * There are no explicit priorities among keys whose values are truncated.
+   * Any kind of key can be removed; the key's {@code canDrop()} attribute indicates
+   * which can be dropped. (The unit tests use a string key.)
+   * Thus keys are equally prioritized, and truncation starts by removing elements
    * from the array whose serialized value length is the biggest.
-   * The resulting string might be correctly deserialized to {@link 
ResponseContext}.
+   * The resulting string will be correctly deserialized to {@link 
ResponseContext}.
    */
-  public SerializationResult serializeWith(ObjectMapper objectMapper, int 
maxCharsNumber) throws JsonProcessingException
+  public SerializationResult serializeWith(ObjectMapper objectMapper, int 
maxCharsNumber)
+      throws JsonProcessingException
   {
-    final String fullSerializedString = 
objectMapper.writeValueAsString(getDelegate());
+    final Map<Key, Object> headerMap =
+        getDelegate().entrySet()
+                     .stream()
+                     .filter(entry -> entry.getKey().getPhase() == 
Visibility.HEADER)
+                     .collect(
+                         Collectors.toMap(
+                             Map.Entry::getKey,
+                             Map.Entry::getValue
+                         )
+                     );
+
+    final String fullSerializedString = 
objectMapper.writeValueAsString(headerMap);
     if (fullSerializedString.length() <= maxCharsNumber) {
       return new SerializationResult(null, fullSerializedString);
-    } else {
-      // Indicates that the context is truncated during serialization.
-      add(Key.TRUNCATED, true);
-      final ObjectNode contextJsonNode = 
objectMapper.valueToTree(getDelegate());
-      final ArrayList<Map.Entry<String, JsonNode>> sortedNodesByLength = 
Lists.newArrayList(contextJsonNode.fields());
-      sortedNodesByLength.sort(VALUE_LENGTH_REVERSED_COMPARATOR);
-      int needToRemoveCharsNumber = fullSerializedString.length() - 
maxCharsNumber;
-      // The complexity of this block is O(n*m*log(m)) where n - context size, 
m - context's array size
-      for (Map.Entry<String, JsonNode> e : sortedNodesByLength) {
-        final String fieldName = e.getKey();
-        final JsonNode node = e.getValue();
-        if (node.isArray()) {
-          if (needToRemoveCharsNumber >= node.toString().length()) {
-            // We need to remove more chars than the field's length so 
removing it completely
-            contextJsonNode.remove(fieldName);
-            // Since the field is completely removed (name + value) we need to 
do a recalculation
-            needToRemoveCharsNumber = contextJsonNode.toString().length() - 
maxCharsNumber;
-          } else {
-            final ArrayNode arrayNode = (ArrayNode) node;
-            needToRemoveCharsNumber -= 
removeNodeElementsToSatisfyCharsLimit(arrayNode, needToRemoveCharsNumber);
-            if (arrayNode.size() == 0) {
-              // The field is empty, removing it because an empty array field 
may be misleading
-              // for the recipients of the truncated response context.
-              contextJsonNode.remove(fieldName);
-              // Since the field is completely removed (name + value) we need 
to do a recalculation
-              needToRemoveCharsNumber = contextJsonNode.toString().length() - 
maxCharsNumber;
-            }
-          } // node is not an array
-        } else {
-          // A context should not contain nulls so we completely remove the 
field.
+    }
+
+    int needToRemoveCharsNumber = fullSerializedString.length() - 
maxCharsNumber;
+    // Indicates that the context is truncated during serialization.
+    headerMap.put(Keys.TRUNCATED, true);
+    // Account for the extra field just added
+    needToRemoveCharsNumber += Keys.TRUNCATED.getName().length() + 7;

Review comment:
       Added a comment to explain. Please check if it is now clear. It didn't 
quite seem worthwhile to define a constant...
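       To make the accounting concrete: one way to see where the 7 comes from, 
assuming the added field serializes as `"truncated":true`, is that the overhead 
beyond the key name is two quotes, a colon, and the four characters of `true`. 
This is an illustrative calculation, not the PR's code; a separating comma, if 
counted, would add one more character.

```java
// Illustrative check of the truncation overhead; "truncated" stands in for
// Keys.TRUNCATED.getName() in the PR. Not Druid code.
public class TruncatedOverheadSketch
{
  public static void main(String[] args)
  {
    String name = "truncated";
    // The field appears in the JSON header as "truncated":true.
    String serializedField = '"' + name + "\":true";
    // Overhead beyond the name itself: two quotes + ':' + "true" = 7 chars.
    int overhead = serializedField.length() - name.length();
    System.out.println(overhead);
  }
}
```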

##########
File path: 
processing/src/main/java/org/apache/druid/query/context/ResponseContext.java
##########
@@ -30,93 +33,334 @@
 import org.apache.druid.java.util.common.IAE;
 import org.apache.druid.java.util.common.ISE;
 import org.apache.druid.java.util.common.NonnullPair;
-import org.apache.druid.java.util.common.jackson.JacksonUtils;
 import org.apache.druid.query.SegmentDescriptor;
+import org.apache.druid.utils.CollectionUtils;
 import org.joda.time.Interval;
 
 import javax.annotation.Nullable;
+
 import java.io.IOException;
 import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Collections;
 import java.util.Comparator;
 import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.ConcurrentSkipListMap;
-import java.util.function.BiFunction;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Function;
+import java.util.stream.Collectors;
 
 /**
  * The context for storing and passing data between chains of {@link 
org.apache.druid.query.QueryRunner}s.
  * The context is also transferred between Druid nodes with all the data it 
contains.
- */
+ * <p>
+ * The response context consists of a set of key/value pairs. Keys are those 
defined in
+ * the {@code Keys} registry. Keys are indexed by key instance, not by name. 
The
+ * key defines the type of the associated value, including logic to merge 
values and
+ * to deserialize JSON values for that key.
+ *
+ * <h4>Structure</h4>
+ * The context has evolved to perform multiple tasks. First, it holds two kinds
+ * of information:
+ * <ul>
+ * <li>Information to be returned in the query response header.
+ * (These are values tagged as {@code HEADER}.)</li>
+ * <li>Values passed within a single server. (These are tagged with
+ * visibility {@code NONE}.)</li>
+ * </ul>
+ * Second, it performs multiple tasks:
+ * <ul>
+ * <li>Registers the keys to be used in the header. But, since it also holds
+ * internal information, the internal information also needs keys, though the
+ * corresponding values are never serialized.</li>
+ * <li>Gathers information for the query as a whole.</li>
+ * <li>Merges information back up the query tree: from multiple segments,
+ * from multiple servers, etc.</li>
+ * <li>Manages header size by dropping fields when the header would get too
+ * large.</li>
+ * </ul>
+ *
+ * As a result, the information in the context, when inspected by a calling
+ * query, may be incomplete if some of it was previously dropped by the
+ * called query.
+ *
+ * <h4>API</h4>
+ *
+ * The query profile needs to obtain the full, untruncated information. To do 
this
+ * it piggy-backs on the set operations to obtain the full value. To ensure 
this
+ * is possible, code that works with standard values should call the set (or 
add)
+ * functions provided which will do the needed map update.
+  */
 @PublicApi
 public abstract class ResponseContext
 {
   /**
    * The base interface of a response context key.
    * Should be implemented by every context key.
    */
-  public interface BaseKey
+  public interface Key
   {
+    /**
+     * Hack, pure and simple. The symbol "ResponseContext.Key.ETAG" must exist
+     * to avoid changing a line of code where we have no tests, which causes
+     * the build to fail. Remove this once the proper tests are added.
+     *
+     * @see org.apache.druid.server.QueryResource#doPost
+     */
+    public static final Key ETAG = Keys.ETAG;

Review comment:
       @jihoonson, I agree, this is super ugly. But how do I ignore the bot? 
The first-phase build fails and I can't get the PR onto the more important 
second phase. Is there some magic comment I can add to tell that particular 
auto-nagger that there's nothing to see here?

##########
File path: server/src/main/java/org/apache/druid/server/QueryResource.java
##########
@@ -214,6 +216,9 @@ public Response doPost(
       final Sequence<?> results = queryResponse.getResults();
       final ResponseContext responseContext = 
queryResponse.getResponseContext();
       final String prevEtag = getPreviousEtag(req);
+      // In the following line, responseContext.get(ResponseContext.Key.ETAG) 
should be
+      // replaced with responseContext.getEntityTag(). But, we have no tests 
for one of the
+      // code paths, so a build check fails, meaning that the change can't be 
made at this time.

Review comment:
       The PR code was originally as shown in the comment. However, the build 
failed because the coverage checker noticed this line changed and pointed out 
an existing lack of coverage. When I asked around, the answer seemed to be "add 
the required test." Since I only changed the accessor, I didn't want to have to 
figure out how to test this area, so I added this hack. What is a better 
solution? Or, who knows enough about this area to suggest the test that needs 
to be added?

##########
File path: 
processing/src/main/java/org/apache/druid/query/context/ResponseContext.java
##########
@@ -30,93 +33,334 @@
 import org.apache.druid.java.util.common.IAE;
 import org.apache.druid.java.util.common.ISE;
 import org.apache.druid.java.util.common.NonnullPair;
-import org.apache.druid.java.util.common.jackson.JacksonUtils;
 import org.apache.druid.query.SegmentDescriptor;
+import org.apache.druid.utils.CollectionUtils;
 import org.joda.time.Interval;
 
 import javax.annotation.Nullable;
+
 import java.io.IOException;
 import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Collections;
 import java.util.Comparator;
 import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.ConcurrentSkipListMap;
-import java.util.function.BiFunction;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Function;
+import java.util.stream.Collectors;
 
 /**
  * The context for storing and passing data between chains of {@link 
org.apache.druid.query.QueryRunner}s.
  * The context is also transferred between Druid nodes with all the data it 
contains.
- */
+ * <p>
+ * The response context consists of a set of key/value pairs. Keys are those 
defined in
+ * the {@code Keys} registry. Keys are indexed by key instance, not by name. 
The
+ * key defines the type of the associated value, including logic to merge 
values and
+ * to deserialize JSON values for that key.
+ *
+ * <h4>Structure</h4>
+ * The context has evolved to perform multiple tasks. First, it holds two kinds
+ * of information:
+ * <ul>
+ * <li>Information to be returned in the query response header.
+ * (These are values tagged as {@code HEADER}.)</li>
+ * <li>Values passed within a single server. (These are tagged with
+ * visibility {@code NONE}.)</li>
+ * </ul>
+ * Second, it performs multiple tasks:
+ * <ul>
+ * <li>Registers the keys to be used in the header. But, since it also holds
+ * internal information, the internal information also needs keys, though the
+ * corresponding values are never serialized.</li>
+ * <li>Gathers information for the query as a whole.</li>
+ * <li>Merges information back up the query tree: from multiple segments,
+ * from multiple servers, etc.</li>
+ * <li>Manages header size by dropping fields when the header would get too
+ * large.</li>
+ * </ul>
+ *
+ * As a result, the information in the context, when inspected by a calling
+ * query, may be incomplete if some of it was previously dropped by the
+ * called query.
+ *
+ * <h4>API</h4>
+ *
+ * The query profile needs to obtain the full, untruncated information. To do 
this
+ * it piggy-backs on the set operations to obtain the full value. To ensure 
this
+ * is possible, code that works with standard values should call the set (or 
add)
+ * functions provided which will do the needed map update.
+  */
 @PublicApi
 public abstract class ResponseContext
 {
   /**
    * The base interface of a response context key.
    * Should be implemented by every context key.
    */
-  public interface BaseKey
+  public interface Key
   {
+    /**
+     * Hack, pure and simple. The symbol "ResponseContext.Key.ETAG" must exist
+     * to avoid changing a line of code where we have no tests, which causes
+     * the build to fail. Remove this once the proper tests are added.
+     *
+     * @see org.apache.druid.server.QueryResource#doPost
+     */
+    public static final Key ETAG = Keys.ETAG;
+
     @JsonValue
     String getName();
+
+    /**
+     * The phase (header, trailer, none) where this key is emitted.
+     */
+    Visibility getPhase();
+
     /**
-     * Merge function associated with a key: Object (Object oldValue, Object 
newValue)
+     * Reads a value of this key from a JSON stream. Used by {@link 
ResponseContextDeserializer}.
+     */
+    Object readValue(JsonParser jp);
+
+    /**
+     * Merges two values of type T.
+     *
+     * This method may modify "oldValue" but must not modify "newValue".
      */
-    BiFunction<Object, Object, Object> getMergeFunction();
+    Object mergeValues(Object oldValue, Object newValue);
+
+    /**
+     * Returns true if this key can be removed to reduce header size when the
+     * header would otherwise be too large.
+     */
+    @JsonIgnore
+    boolean canDrop();
+  }
+
+  /**
+   * Where the key is emitted, if at all. Values in the context can be for 
internal
+   * use only: for return before the query results (header) or only after query
+   * results (trailer).
+   */
+  public enum Visibility
+  {
+    /**
+     * Keys that are present in both the "X-Druid-Response-Context" header 
*and* the response context trailer.
+     */
+    HEADER,

Review comment:
       That's a good question for the next PR. Here, it seems that, in picking 
out this set of changes, I picked the wrong comment. Fixed that.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


