jon-wei commented on a change in pull request #7133: 6088 - Time Ordering On Scans
URL: https://github.com/apache/incubator-druid/pull/7133#discussion_r267144903
 
 

 ##########
 File path: processing/src/main/java/org/apache/druid/query/scan/ScanQueryRunnerFactory.java
 ##########
 @@ -68,34 +81,117 @@ public ScanQueryRunnerFactory(
   )
   {
     // in single thread and in jetty thread instead of processing thread
-    return new QueryRunner<ScanResultValue>()
-    {
-      @Override
-      public Sequence<ScanResultValue> run(
-          final QueryPlus<ScanResultValue> queryPlus, final Map<String, Object> responseContext
-      )
-      {
-        // Note: this variable is effective only when queryContext has a timeout.
-        // See the comment of CTX_TIMEOUT_AT.
-        final long timeoutAt = System.currentTimeMillis() + QueryContexts.getTimeout(queryPlus.getQuery());
-        responseContext.put(CTX_TIMEOUT_AT, timeoutAt);
+    return (queryPlus, responseContext) -> {
+      ScanQuery query = (ScanQuery) queryPlus.getQuery();
+      int numSegments = 0;
+      final Iterator<QueryRunner<ScanResultValue>> segmentIt = queryRunners.iterator();
+      for (; segmentIt.hasNext(); numSegments++) {
+        segmentIt.next();
+      }
+      // Note: this variable is effective only when queryContext has a timeout.
+      // See the comment of CTX_TIMEOUT_AT.
+      final long timeoutAt = System.currentTimeMillis() + QueryContexts.getTimeout(queryPlus.getQuery());
+      responseContext.put(CTX_TIMEOUT_AT, timeoutAt);
+      if (query.getOrder().equals(ScanQuery.Order.NONE)) {
+        // Use normal strategy
         return Sequences.concat(
             Sequences.map(
                 Sequences.simple(queryRunners),
-                new Function<QueryRunner<ScanResultValue>, Sequence<ScanResultValue>>()
-                {
-                  @Override
-                  public Sequence<ScanResultValue> apply(final QueryRunner<ScanResultValue> input)
-                  {
-                    return input.run(queryPlus, responseContext);
-                  }
-                }
+                input -> input.run(queryPlus, responseContext)
             )
         );
+      } else if (query.getLimit() <= scanQueryConfig.getMaxRowsQueuedForTimeOrdering()) {
+        // Use priority queue strategy
+        return sortAndLimitScanResultValues(
+            Sequences.concat(Sequences.map(
+                Sequences.simple(queryRunners),
+                input -> input.run(queryPlus, responseContext)
+            )),
+            query
+        );
+      } else if (numSegments <= scanQueryConfig.getMaxSegmentsTimeOrderedInMemory()) {
 
 Review comment:
   I think this is considering too large a set of segments. Suppose I had the following hourly-granularity segments (with 2 shards per hour):
   
   ```
   wikipedia_2016-06-27T01:00:00.000Z_2016-06-27T02:00:00.000Z_2019-03-19T21:38:31.492Z
   wikipedia_2016-06-27T01:00:00.000Z_2016-06-27T02:00:00.000Z_2019-03-19T21:38:31.492Z_1
   wikipedia_2016-06-27T02:00:00.000Z_2016-06-27T03:00:00.000Z_2019-03-19T21:38:31.492Z
   wikipedia_2016-06-27T02:00:00.000Z_2016-06-27T03:00:00.000Z_2019-03-19T21:38:31.492Z_1
   ```
   
   You could create two subsequences, one per hour:
   
   ```
   wikipedia_2016-06-27T01:00:00.000Z_2016-06-27T02:00:00.000Z_2019-03-19T21:38:31.492Z
   wikipedia_2016-06-27T01:00:00.000Z_2016-06-27T02:00:00.000Z_2019-03-19T21:38:31.492Z_1
   ```
   and 
   
   ```
   wikipedia_2016-06-27T02:00:00.000Z_2016-06-27T03:00:00.000Z_2019-03-19T21:38:31.492Z
   wikipedia_2016-06-27T02:00:00.000Z_2016-06-27T03:00:00.000Z_2019-03-19T21:38:31.492Z_1
   ```
   
   where `maxSegmentsTimeOrderedInMemory` applies to each subsequence (you only need to simultaneously read the segment files that are partitions of the same time period).

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services