This is an automated email from the ASF dual-hosted git repository.

gian pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new 267b32c2e2 Set druid.processing.fifo to true by default (#12571)
267b32c2e2 is described below

commit 267b32c2e2c283ea0602a3b080df3ae553c73683
Author: Suneet Saldanha <[email protected]>
AuthorDate: Mon Aug 8 10:18:24 2022 -0700

    Set druid.processing.fifo to true by default (#12571)
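    Note for operators: this commit flips the built-in default only. A deployment that depended on the previous non-FIFO behavior can keep it by setting the property explicitly, which always overrides the compiled-in default. An illustrative runtime.properties fragment (not part of this commit):

    ```properties
    # Restore the pre-#12571 behavior: do not force FIFO ordering
    # for tasks of equal priority in the processing queue.
    druid.processing.fifo=false
    ```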
---
 docs/configuration/index.md                                       | 8 ++++----
 .../main/java/org/apache/druid/query/DruidProcessingConfig.java   | 2 +-
 .../java/org/apache/druid/query/DruidProcessingConfigTest.java    | 8 ++++----
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index 38fb94c3f6..dee540f399 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -1393,7 +1393,7 @@ Processing properties set on the Middlemanager will be passed through to Peons.
 |`druid.processing.numMergeBuffers`|The number of direct memory buffers available for merging query results. The buffers are sized by `druid.processing.buffer.sizeBytes`. This property is effectively a concurrency limit for queries that require merging buffers. If you are using any queries that require merge buffers (currently, just groupBy v2) then you should have at least two of these.|`max(2, druid.processing.numThreads / 4)`|
 |`druid.processing.numThreads`|The number of processing threads to have available for parallel processing of segments. Our rule of thumb is `num_cores - 1`, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value `1`.|Number of cores - 1 (or 1)|
 |`druid.processing.columnCache.sizeBytes`|Maximum size in bytes for the dimension value lookup cache. Any value greater than `0` enables the cache. It is currently disabled by default. Enabling the lookup cache can significantly improve the performance of aggregators operating on dimension values, such as the JavaScript aggregator, or cardinality aggregator, but can slow things down if the cache hit rate is low (i.e. dimensions with few repeating values). Enabling it may also require add [...]
-|`druid.processing.fifo`|If the processing queue should treat tasks of equal priority in a FIFO manner|`false`|
+|`druid.processing.fifo`|If the processing queue should treat tasks of equal priority in a FIFO manner|`true`|
 |`druid.processing.tmpDir`|Path where temporary files created while processing a query should be stored. If specified, this configuration takes priority over the default `java.io.tmpdir` path.|path represented by `java.io.tmpdir`|
 |`druid.processing.intermediaryData.storage.type`|Storage type for storing intermediary segments of data shuffle between native parallel index tasks. Current choices are "local" which stores segment files in local storage of Middle Managers (or Indexer) or "deepstore" which uses configured deep storage. Note - With "deepstore" type data is stored in `shuffle-data` directory under the configured deep storage path, auto clean up for this directory is not supported yet. One can setup cloud  [...]
 
@@ -1541,7 +1541,7 @@ Druid uses Jetty to serve HTTP requests.
 |`druid.processing.numMergeBuffers`|The number of direct memory buffers available for merging query results. The buffers are sized by `druid.processing.buffer.sizeBytes`. This property is effectively a concurrency limit for queries that require merging buffers. If you are using any queries that require merge buffers (currently, just groupBy v2) then you should have at least two of these.|`max(2, druid.processing.numThreads / 4)`|
 |`druid.processing.numThreads`|The number of processing threads to have available for parallel processing of segments. Our rule of thumb is `num_cores - 1`, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value `1`.|Number of cores - 1 (or 1)|
 |`druid.processing.columnCache.sizeBytes`|Maximum size in bytes for the dimension value lookup cache. Any value greater than `0` enables the cache. It is currently disabled by default. Enabling the lookup cache can significantly improve the performance of aggregators operating on dimension values, such as the JavaScript aggregator, or cardinality aggregator, but can slow things down if the cache hit rate is low (i.e. dimensions with few repeating values). Enabling it may also require add [...]
-|`druid.processing.fifo`|If the processing queue should treat tasks of equal priority in a FIFO manner|`false`|
+|`druid.processing.fifo`|If the processing queue should treat tasks of equal priority in a FIFO manner|`true`|
 |`druid.processing.tmpDir`|Path where temporary files created while processing a query should be stored. If specified, this configuration takes priority over the default `java.io.tmpdir` path.|path represented by `java.io.tmpdir`|
 
 The amount of direct memory needed by Druid is at least
@@ -1651,7 +1651,7 @@ Druid uses Jetty to serve HTTP requests.
 |`druid.processing.numMergeBuffers`|The number of direct memory buffers available for merging query results. The buffers are sized by `druid.processing.buffer.sizeBytes`. This property is effectively a concurrency limit for queries that require merging buffers. If you are using any queries that require merge buffers (currently, just groupBy v2) then you should have at least two of these.|`max(2, druid.processing.numThreads / 4)`|
 |`druid.processing.numThreads`|The number of processing threads to have available for parallel processing of segments. Our rule of thumb is `num_cores - 1`, which means that even under heavy load there will still be one core available to do background tasks like talking with ZooKeeper and pulling down segments. If only one core is available, this property defaults to the value `1`.|Number of cores - 1 (or 1)|
 |`druid.processing.columnCache.sizeBytes`|Maximum size in bytes for the dimension value lookup cache. Any value greater than `0` enables the cache. It is currently disabled by default. Enabling the lookup cache can significantly improve the performance of aggregators operating on dimension values, such as the JavaScript aggregator, or cardinality aggregator, but can slow things down if the cache hit rate is low (i.e. dimensions with few repeating values). Enabling it may also require add [...]
-|`druid.processing.fifo`|If the processing queue should treat tasks of equal priority in a FIFO manner|`false`|
+|`druid.processing.fifo`|If the processing queue should treat tasks of equal priority in a FIFO manner|`true`|
 |`druid.processing.tmpDir`|Path where temporary files created while processing a query should be stored. If specified, this configuration takes priority over the default `java.io.tmpdir` path.|path represented by `java.io.tmpdir`|
 
 The amount of direct memory needed by Druid is at least
@@ -1815,7 +1815,7 @@ The broker uses processing configs for nested groupBy queries.
 |`druid.processing.buffer.poolCacheMaxCount`|processing buffer pool caches the buffers for later use, this is the maximum count cache will grow to. note that pool can create more buffers than it can cache if necessary.|Integer.MAX_VALUE|
 |`druid.processing.numMergeBuffers`|The number of direct memory buffers available for merging query results. The buffers are sized by `druid.processing.buffer.sizeBytes`. This property is effectively a concurrency limit for queries that require merging buffers. If you are using any queries that require merge buffers (currently, just groupBy v2) then you should have at least two of these.|`max(2, druid.processing.numThreads / 4)`|
 |`druid.processing.columnCache.sizeBytes`|Maximum size in bytes for the dimension value lookup cache. Any value greater than `0` enables the cache. It is currently disabled by default. Enabling the lookup cache can significantly improve the performance of aggregators operating on dimension values, such as the JavaScript aggregator, or cardinality aggregator, but can slow things down if the cache hit rate is low (i.e. dimensions with few repeating values). Enabling it may also require add [...]
-|`druid.processing.fifo`|If the processing queue should treat tasks of equal priority in a FIFO manner|`false`|
+|`druid.processing.fifo`|If the processing queue should treat tasks of equal priority in a FIFO manner|`true`|
 |`druid.processing.tmpDir`|Path where temporary files created while processing a query should be stored. If specified, this configuration takes priority over the default `java.io.tmpdir` path.|path represented by `java.io.tmpdir`|
 |`druid.processing.merge.useParallelMergePool`|Enable automatic parallel merging for Brokers on a dedicated async ForkJoinPool. If `false`, instead merges will be done serially on the `HTTP` thread pool.|`true`|
 |`druid.processing.merge.pool.parallelism`|Size of ForkJoinPool. Note that the default configuration assumes that the value returned by `Runtime.getRuntime().availableProcessors()` represents 2 hyper-threads per physical core, and multiplies this value by `0.75` in attempt to size `1.5` times the number of _physical_ cores.|`Runtime.getRuntime().availableProcessors() * 0.75` (rounded up)|
diff --git a/processing/src/main/java/org/apache/druid/query/DruidProcessingConfig.java b/processing/src/main/java/org/apache/druid/query/DruidProcessingConfig.java
index 2fd22cb32d..d8e4cd731b 100644
--- a/processing/src/main/java/org/apache/druid/query/DruidProcessingConfig.java
+++ b/processing/src/main/java/org/apache/druid/query/DruidProcessingConfig.java
@@ -152,7 +152,7 @@ public abstract class DruidProcessingConfig extends ExecutorServiceConfig implem
   @Config(value = "${base_path}.fifo")
   public boolean isFifo()
   {
-    return false;
+    return true;
   }
 
   @Config(value = "${base_path}.tmpDir")
diff --git a/processing/src/test/java/org/apache/druid/query/DruidProcessingConfigTest.java b/processing/src/test/java/org/apache/druid/query/DruidProcessingConfigTest.java
index c5888b75e5..32aa9db396 100644
--- a/processing/src/test/java/org/apache/druid/query/DruidProcessingConfigTest.java
+++ b/processing/src/test/java/org/apache/druid/query/DruidProcessingConfigTest.java
@@ -91,7 +91,7 @@ public class DruidProcessingConfigTest
     Assert.assertEquals(NUM_PROCESSORS - 1, config.getNumThreads());
     Assert.assertEquals(Math.max(2, config.getNumThreads() / 4), config.getNumMergeBuffers());
     Assert.assertEquals(0, config.columnCacheSizeBytes());
-    Assert.assertFalse(config.isFifo());
+    Assert.assertTrue(config.isFifo());
     Assert.assertEquals(System.getProperty("java.io.tmpdir"), config.getTmpDir());
     Assert.assertEquals(BUFFER_SIZE, config.intermediateComputeSizeBytes());
   }
@@ -106,7 +106,7 @@ public class DruidProcessingConfigTest
     Assert.assertTrue(config.getNumThreads() == 1);
     Assert.assertEquals(Math.max(2, config.getNumThreads() / 4), config.getNumMergeBuffers());
     Assert.assertEquals(0, config.columnCacheSizeBytes());
-    Assert.assertFalse(config.isFifo());
+    Assert.assertTrue(config.isFifo());
     Assert.assertEquals(System.getProperty("java.io.tmpdir"), config.getTmpDir());
     Assert.assertEquals(BUFFER_SIZE, config.intermediateComputeSizeBytes());
   }
@@ -132,7 +132,7 @@ public class DruidProcessingConfigTest
     props.setProperty("druid.processing.buffer.poolCacheMaxCount", "1");
     props.setProperty("druid.processing.numThreads", "256");
     props.setProperty("druid.processing.columnCache.sizeBytes", "1");
-    props.setProperty("druid.processing.fifo", "true");
+    props.setProperty("druid.processing.fifo", "false");
     props.setProperty("druid.processing.tmpDir", "/test/path");
 
 
@@ -150,7 +150,7 @@ public class DruidProcessingConfigTest
     Assert.assertEquals(256, config.getNumThreads());
     Assert.assertEquals(64, config.getNumMergeBuffers());
     Assert.assertEquals(1, config.columnCacheSizeBytes());
-    Assert.assertTrue(config.isFifo());
+    Assert.assertFalse(config.isFifo());
     Assert.assertEquals("/test/path", config.getTmpDir());
     Assert.assertEquals(0, config.getNumInitalBuffersForIntermediatePool());
   }

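The behavioral difference the flag controls can be sketched generically. This is an illustrative example, not Druid's implementation: a plain priority queue makes no ordering guarantee between equal-priority entries, so FIFO behavior requires an explicit submission-order tiebreaker, which is what `druid.processing.fifo=true` selects for the processing queue. The `Task` class and sequence counter below are hypothetical names for the sketch.

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: break ties between equal-priority tasks
// by submission order, so equal-priority work drains FIFO.
public class FifoPriorityQueueDemo
{
  static final class Task
  {
    final int priority;  // higher value drains first
    final long seq;      // monotonically increasing submission order
    final String name;

    Task(int priority, long seq, String name)
    {
      this.priority = priority;
      this.seq = seq;
      this.name = name;
    }
  }

  public static void main(String[] args)
  {
    AtomicLong seq = new AtomicLong();
    // Order by descending priority, then by ascending submission sequence.
    PriorityBlockingQueue<Task> queue = new PriorityBlockingQueue<>(
        11,
        Comparator.comparingInt((Task t) -> -t.priority)
                  .thenComparingLong(t -> t.seq)
    );

    queue.add(new Task(0, seq.getAndIncrement(), "first"));
    queue.add(new Task(0, seq.getAndIncrement(), "second"));
    queue.add(new Task(1, seq.getAndIncrement(), "urgent"));

    // "urgent" drains first; the two equal-priority tasks drain
    // in the order they were submitted.
    while (!queue.isEmpty()) {
      System.out.println(queue.poll().name);
    }
  }
}
```

Without the `thenComparingLong` tiebreaker, the relative order of "first" and "second" would be unspecified, which is the pre-change default the commit moves away from.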

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
