LakshSingla commented on code in PR #16620:
URL: https://github.com/apache/druid/pull/16620#discussion_r1650388068
##########
processing/src/main/java/org/apache/druid/query/groupby/epinephelinae/RowBasedGrouperHelper.java:
##########
@@ -1371,6 +1361,77 @@ public Grouper.BufferComparator bufferComparatorWithAggregators(
);
}
+  @Override
+  public ObjectMapper decorateObjectMapper(ObjectMapper spillMapper)
+  {
+
+    final JsonDeserializer<RowBasedKey> deserializer = new JsonDeserializer<RowBasedKey>()
+    {
+      @Override
+      public RowBasedKey deserialize(
+          JsonParser jp,
+          DeserializationContext deserializationContext
+      ) throws IOException
+      {
+        if (!jp.isExpectedStartArrayToken()) {
+          throw DruidException.defensive("Expected array start token, received [%s]", jp.getCurrentToken());
+        }
+        jp.nextToken();
+
+        final ObjectCodec codec = jp.getCodec();
+        final int timestampAdjustment = includeTimestamp ? 1 : 0;
+        final int dimsToRead = timestampAdjustment + serdeHelpers.length;
+        int dimsReadSoFar = 0;
+        final Object[] objects = new Object[dimsToRead];
+
+        while (jp.currentToken() != JsonToken.END_ARRAY) {
+          if (dimsReadSoFar >= dimsToRead) {
+            throw DruidException.defensive("More dimensions encountered than expected [%d]", dimsToRead);
+          }
+
+          if (includeTimestamp && dimsReadSoFar == 0) {
+            // Read the timestamp
+            objects[dimsReadSoFar] = codec.readValue(jp, Long.class);
+          } else {
Review Comment:
I tried this, but the complexity is the same, given that we'd have to
duplicate the defensive checks, the loop counter increment, and the JSON
stream advance. I have left it as is for now.
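For readers following along, the pattern under discussion can be sketched outside Druid as a plain Jackson streaming loop. This is a minimal illustration, not Druid's actual code: the class and method names (`ArrayKeyReader`, `readKey`) are hypothetical, `IllegalStateException` stands in for `DruidException.defensive`, and a fixed dimension count replaces `serdeHelpers.length`. It shows the three pieces the comment says would have to be duplicated if the loop were split: the defensive checks, the loop counter increment, and advancing the token stream once per element.

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import java.io.IOException;

public class ArrayKeyReader
{
  // Reads a JSON array of the form [timestamp, dim1, dim2, ...]:
  // defensive start-array check, bounded loop, and a single token-stream
  // advance per element, mirroring the deserialize() method in the diff.
  static Object[] readKey(String json, boolean includeTimestamp, int numDims) throws IOException
  {
    final JsonParser jp = new JsonFactory().createParser(json);
    jp.nextToken(); // position the parser on the first token
    if (!jp.isExpectedStartArrayToken()) {
      throw new IllegalStateException("Expected array start token, received [" + jp.currentToken() + "]");
    }
    jp.nextToken();

    final int timestampAdjustment = includeTimestamp ? 1 : 0;
    final int dimsToRead = timestampAdjustment + numDims;
    int dimsReadSoFar = 0;
    final Object[] objects = new Object[dimsToRead];

    while (jp.currentToken() != JsonToken.END_ARRAY) {
      if (dimsReadSoFar >= dimsToRead) {
        throw new IllegalStateException("More dimensions encountered than expected [" + dimsToRead + "]");
      }
      if (includeTimestamp && dimsReadSoFar == 0) {
        objects[dimsReadSoFar] = jp.getLongValue(); // timestamp slot
      } else {
        objects[dimsReadSoFar] = jp.getValueAsString(); // dimension value
      }
      dimsReadSoFar++; // loop counter increment
      jp.nextToken();  // JSON stream advance
    }
    return objects;
  }

  public static void main(String[] args) throws IOException
  {
    Object[] key = readKey("[1718668800000, \"a\", \"b\"]", true, 2);
    System.out.println(key[0] + " " + key[1] + " " + key[2]); // prints "1718668800000 a b"
  }
}
```

Splitting the timestamp branch into a separate pre-loop read would force the same bounds check and the same `nextToken()` call to appear in both places, which is the duplication the comment refers to.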
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]