[druid] branch master updated: Fix a resource leak with Window processing (#14573)

2023-07-12 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 65e1b27aa7 Fix a resource leak with Window processing (#14573)
65e1b27aa7 is described below

commit 65e1b27aa709dc3e11ae75993656243377744666
Author: imply-cheddar <86940447+imply-ched...@users.noreply.github.com>
AuthorDate: Thu Jul 13 07:25:42 2023 +0900

Fix a resource leak with Window processing (#14573)

* Fix a resource leak with Window processing

Additionally, in order to find the leak, the StupidPool was
adjusted to track leaks a bit better. It appears that the pool
objects get GC'd during testing for some reason, causing objects
that had been returned, but were then GC'd along with the pool,
to be incorrectly identified as leaks.
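The GC-based leak tracking described above can be sketched outside Druid with `java.lang.ref.Cleaner`. This is a minimal illustration with invented names, not the actual StupidPool code: each checked-out holder registers a cleaner action that counts a leak only if the holder is collected before being returned, and the action deliberately avoids capturing the holder itself so the holder can actually be collected.

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

// Hedged sketch (NOT Druid's StupidPool): detect pooled objects that are
// garbage-collected without being returned to the pool.
class LeakTrackingPool<T> {
    private static final Cleaner CLEANER = Cleaner.create();
    private final Supplier<T> generator;
    final AtomicLong leakedObjects = new AtomicLong();

    LeakTrackingPool(Supplier<T> generator) {
        this.generator = generator;
    }

    static final class Holder<T> implements AutoCloseable {
        final T object;
        private final AtomicBoolean returned;
        private final Cleaner.Cleanable cleanable;

        private Holder(T object, AtomicLong leakCounter) {
            this.object = object;
            AtomicBoolean returnedFlag = new AtomicBoolean(false);
            this.returned = returnedFlag;
            // The action must not reference the Holder, or it could never be GC'd.
            this.cleanable = CLEANER.register(this, () -> {
                if (!returnedFlag.get()) {
                    leakCounter.incrementAndGet(); // GC'd without close(): a leak
                }
            });
        }

        @Override
        public void close() {
            returned.set(true);  // mark as returned before the cleaner action can run
            cleanable.clean();
        }
    }

    Holder<T> take() {
        return new Holder<>(generator.get(), leakedObjects);
    }
}
```

A holder that is taken and properly closed never increments the counter, even if the cleaner action later runs; only a holder dropped without `close()` does.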

* Suppress unused warning
---
 .../org/apache/druid/collections/StupidPool.java   |  62 ++---
 .../query/operator/LimitTimeIntervalOperator.java  |  20 ++---
 .../WindowOperatorQueryQueryRunnerFactory.java |  53 ++-
 .../rowsandcols/LazilyDecoratedRowsAndColumns.java | 100 +
 .../druid/query/rowsandcols/RowsAndColumns.java|  31 +++
 .../druid/query/rowsandcols/SemanticCreator.java   |  37 
 .../concrete/QueryableIndexRowsAndColumns.java |  14 ++-
 .../semantic/DefaultNaiveSortMaker.java|   9 +-
 .../org/apache/druid/segment/IndexBuilder.java |   3 +-
 .../druid/sql/calcite/DrillWindowQueryTest.java|  18 +++-
 10 files changed, 275 insertions(+), 72 deletions(-)

diff --git a/processing/src/main/java/org/apache/druid/collections/StupidPool.java b/processing/src/main/java/org/apache/druid/collections/StupidPool.java
index ced36e3a9d..06536c5d33 100644
--- a/processing/src/main/java/org/apache/druid/collections/StupidPool.java
+++ b/processing/src/main/java/org/apache/druid/collections/StupidPool.java
@@ -24,12 +24,13 @@ import com.google.common.base.Preconditions;
 import com.google.common.base.Supplier;
 import org.apache.druid.java.util.common.Cleaners;
 import org.apache.druid.java.util.common.ISE;
-import org.apache.druid.java.util.common.RE;
+import org.apache.druid.java.util.common.StringUtils;
 import org.apache.druid.java.util.common.logger.Logger;
 
 import java.lang.ref.WeakReference;
 import java.util.Queue;
 import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.CopyOnWriteArrayList;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.concurrent.atomic.AtomicReference;
@@ -105,7 +106,8 @@ public class StupidPool<T> implements NonBlockingPool<T>
   private final AtomicLong createdObjectsCounter = new AtomicLong(0);
   private final AtomicLong leakedObjectsCounter = new AtomicLong(0);
 
-  private final AtomicReference<RuntimeException> capturedException = new AtomicReference<>(null);
+  private final AtomicReference<CopyOnWriteArrayList<LeakedException>> capturedException =
+      new AtomicReference<>(null);
 
   //note that this is just the max entries in the cache, pool can still create as many buffers as needed.
   private final int objectsCacheMaxCount;
@@ -149,30 +151,41 @@ public class StupidPool<T> implements NonBlockingPool<T>
     ObjectResourceHolder resourceHolder = objects.poll();
     if (resourceHolder == null) {
       if (POISONED.get() && capturedException.get() != null) {
-        throw capturedException.get();
+        throw makeExceptionForLeaks(capturedException.get());
       }
       return makeObjectWithHandler();
     } else {
       poolSize.decrementAndGet();
       if (POISONED.get()) {
-        final RuntimeException exception = capturedException.get();
-        if (exception == null) {
-          resourceHolder.notifier.except = new RE("Thread[%s]: leaky leak!", Thread.currentThread().getName());
+        final CopyOnWriteArrayList<LeakedException> exceptionList = capturedException.get();
+        if (exceptionList == null) {
+          resourceHolder.notifier.except = new LeakedException(Thread.currentThread().getName());
         } else {
-          throw exception;
+          throw makeExceptionForLeaks(exceptionList);
         }
       }
       return resourceHolder;
     }
   }
 
+  private RuntimeException makeExceptionForLeaks(CopyOnWriteArrayList<LeakedException> exceptionList)
+  {
+    RuntimeException toThrow = new RuntimeException(
+        "Leaks happened, each suppressed exception represents one code path that checked out an object and didn't return it."
+    );
+    for (LeakedException exception : exceptionList) {
+      toThrow.addSuppressed(exception);
+    }
+    return toThrow;
+  }
+
   private ObjectResourceHolder makeObjectWithHandler()
   {
 T object = generator.get();
 createdObjectsCounter.incrementAndGet();
 ObjectId o

[druid] branch master updated: Add a configurable bufferPeriod between when a segment is marked unused and deleted by KillUnusedSegments duty (#12599)

2023-08-17 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 9c124f2cde Add a configurable bufferPeriod between when a segment is 
marked unused and deleted by KillUnusedSegments duty (#12599)
9c124f2cde is described below

commit 9c124f2cde074d268f95ccf5989f548e495237b0
Author: Lucas Capistrant 
AuthorDate: Thu Aug 17 19:32:51 2023 -0500

Add a configurable bufferPeriod between when a segment is marked unused and 
deleted by KillUnusedSegments duty (#12599)

* Add new configurable buffer period to create gap between mark unused and 
kill of segment

* Changes after testing

* fixes and improvements

* changes after initial self review

* self review changes

* update sql statement that was lacking last_used

* shore up some code in SqlMetadataConnector after self review

* fix derby compatibility and improve testing/docs

* fix checkstyle violations

* Fixes post merge with master

* add some unit tests to improve coverage

* ignore test coverage on new UpdateTools cli tool

* another attempt to ignore UpdateTables in coverage check

* change column name to used_flag_last_updated

* fix a method signature after column name switch

* update docs spelling

* Update spelling dictionary

* Fixing up docs/spelling and integrating altering tasks table with my 
alteration code

* Update NULL values for used_flag_last_updated in the background

* Remove logic to allow segs with null used_flag_last_updated to be killed 
regardless of bufferPeriod

* remove unneeded things now that the new column is automatically updated

* Test new background row updater method

* fix broken tests

* fix create table statement

* cleanup DDL formatting

* Revert adding columns to entry table by default

* fix compilation issues after merge with master

* discovered and fixed metastore inserts that were breaking integration 
tests

* fixup forgotten insert by using pattern of sharing now timestamp across 
columns

* fix issue introduced by merge

* fixup after merge with master

* add some directions to docs in the case of segment table validation issues
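The buffer-period semantics added here can be distilled as a small check. This is a hedged, illustrative sketch (class and method names invented, not Druid's code): a segment marked unused only becomes eligible for the kill duty once the configured buffer has elapsed since its `used_flag_last_updated` timestamp.

```java
import java.time.Duration;
import java.time.Instant;

// Hedged sketch of the bufferPeriod check described in this commit.
class BufferPeriodCheck {
    // A segment is killable only when lastUpdated + buffer is strictly in the past.
    static boolean isEligibleForKill(Instant usedFlagLastUpdated, Duration bufferPeriod, Instant now) {
        return usedFlagLastUpdated.plus(bufferPeriod).isBefore(now);
    }
}
```

With a 30-day buffer, a segment marked unused on 2023-08-01 would not be killable on 2023-08-15, but would be on 2023-09-15.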
---
 docs/configuration/index.md|   1 +
 docs/design/metadata-storage.md|   4 +-
 docs/operations/upgrade-prep.md|  71 
 .../test-data/high-availability-sample-data.sql|  10 +-
 .../docker/test-data/ldap-security-sample-data.sql |   2 +-
 .../docker/test-data/query-error-sample-data.sql   |  10 +-
 .../docker/test-data/query-retry-sample-data.sql   |  10 +-
 .../docker/test-data/query-sample-data.sql |  10 +-
 .../docker/test-data/security-sample-data.sql  |   2 +-
 pom.xml|   3 +
 .../druid/metadata/MetadataStorageConnector.java   |   8 +
 .../metadata/TestMetadataStorageConnector.java |   6 +
 .../SQLMetadataStorageUpdaterJobHandler.java   |  11 +-
 .../IndexerSQLMetadataStorageCoordinator.java  |  10 +-
 .../druid/metadata/SQLMetadataConnector.java   | 180 -
 .../metadata/SQLMetadataSegmentPublisher.java  |  14 +-
 .../druid/metadata/SegmentsMetadataManager.java|  20 ++-
 .../druid/metadata/SqlSegmentsMetadataManager.java | 124 +-
 .../druid/metadata/SqlSegmentsMetadataQuery.java   |  11 +-
 .../metadata/storage/derby/DerbyConnector.java |  58 ---
 .../druid/server/coordinator/DruidCoordinator.java |   2 +
 .../server/coordinator/DruidCoordinatorConfig.java |   4 +
 .../coordinator/duty/KillUnusedSegments.java   |   7 +-
 .../IndexerSQLMetadataStorageCoordinatorTest.java  |  11 +-
 .../druid/metadata/SQLMetadataConnectorTest.java   |  44 -
 .../metadata/SqlSegmentsMetadataManagerTest.java   | 118 +-
 .../apache/druid/metadata/TestDerbyConnector.java  |  26 +++
 .../coordinator/TestDruidCoordinatorConfig.java|  22 ++-
 .../coordinator/duty/KillUnusedSegmentsTest.java   |   7 +-
 .../simulate/TestSegmentsMetadataManager.java  |  12 +-
 .../src/main/java/org/apache/druid/cli/Main.java   |   3 +-
 .../java/org/apache/druid/cli/UpdateTables.java| 134 +++
 website/.spelling  |   1 +
 33 files changed, 832 insertions(+), 124 deletions(-)

diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index 4924fb478f..deb1e7c541 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -858,6 +858,7 @@ These Coordinator static configurations can be defined in the `coordinator/runti
 |`druid.coordinator.kill.period`|How often to send kill tasks to

[druid] branch master updated: Skip streaming auto-scaling action if supervisor is idle (#14773)

2023-08-17 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new a8eaa1e4ed Skip streaming auto-scaling action if supervisor is idle 
(#14773)
a8eaa1e4ed is described below

commit a8eaa1e4ed81f94fe53ae14bbb078678e35de105
Author: Jonathan Wei 
AuthorDate: Thu Aug 17 19:43:25 2023 -0500

Skip streaming auto-scaling action if supervisor is idle (#14773)

* Skip streaming auto-scaling action if supervisor is idle

* Update 
indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/SeekableStreamSupervisor.java

Co-authored-by: Abhishek Radhakrishnan 

-

Co-authored-by: Abhishek Radhakrishnan 
---
 .../supervisor/SeekableStreamSupervisor.java   |  7 +++
 .../SeekableStreamSupervisorSpecTest.java  | 65 ++
 2 files changed, 72 insertions(+)

diff --git a/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/SeekableStreamSupervisor.java b/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/SeekableStreamSupervisor.java
index 29fd16d1a4..0d1e32c49b 100644
--- a/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/SeekableStreamSupervisor.java
+++ b/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/supervisor/SeekableStreamSupervisor.java
@@ -443,6 +443,13 @@ public abstract class SeekableStreamSupervisor

[druid] branch master updated: Make RecordSupplierInputSource respect sampler timeout when stream is empty (#13296)

2022-11-03 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 2fdaa2fcab Make RecordSupplierInputSource respect sampler timeout when 
stream is empty (#13296)
2fdaa2fcab is described below

commit 2fdaa2fcabc7ceb91568ce1e6b1fcede2da7602c
Author: Jonathan Wei 
AuthorDate: Thu Nov 3 17:45:35 2022 -0500

Make RecordSupplierInputSource respect sampler timeout when stream is empty 
(#13296)

* Make RecordSupplierInputSource respect sampler timeout when stream is 
empty

* Rename timeout param, make it nullable, add timeout test
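The deadline pattern this commit adds to the sampler can be distilled into a standalone sketch (names are illustrative, not Druid's code): compute a termination time once at creation, then bail out of the inner polling loop when the deadline passes instead of spinning on an empty stream forever.

```java
import java.util.Iterator;
import java.util.List;

// Hedged sketch of the nullable-timeout deadline pattern from this commit.
class DeadlinePoller {
    static int drainWithDeadline(Iterator<Integer> records, Long timeoutMs) {
        long createTime = System.currentTimeMillis();
        // null timeout means "no deadline", mirroring the nullable param above
        Long terminationTime = timeoutMs != null ? createTime + timeoutMs : null;
        int drained = 0;
        while (records.hasNext()) {
            if (terminationTime != null && System.currentTimeMillis() > terminationTime) {
                break; // timeout exceeded: return what we have so far
            }
            records.next();
            drained++;
        }
        return drained;
    }
}
```

A null timeout drains everything; an already-expired deadline returns immediately with whatever was gathered before it.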
---
 .../seekablestream/RecordSupplierInputSource.java  | 24 -
 .../seekablestream/SeekableStreamSamplerSpec.java  |  6 +++--
 .../overlord/sampler/InputSourceSamplerTest.java   |  2 +-
 .../RecordSupplierInputSourceTest.java | 31 +-
 4 files changed, 58 insertions(+), 5 deletions(-)

diff --git a/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/RecordSupplierInputSource.java b/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/RecordSupplierInputSource.java
index c387571507..ee54f2ac22 100644
--- a/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/RecordSupplierInputSource.java
+++ b/indexing-service/src/main/java/org/apache/druid/indexing/seekablestream/RecordSupplierInputSource.java
@@ -31,6 +31,7 @@ import org.apache.druid.indexing.overlord.sampler.SamplerException;
 import org.apache.druid.indexing.seekablestream.common.OrderedPartitionableRecord;
 import org.apache.druid.indexing.seekablestream.common.RecordSupplier;
 import org.apache.druid.indexing.seekablestream.common.StreamPartition;
+import org.apache.druid.java.util.common.logger.Logger;
 import org.apache.druid.java.util.common.parsers.CloseableIterator;
 
 import javax.annotation.Nullable;
@@ -45,19 +46,28 @@ import java.util.stream.Collectors;
  */
 public class RecordSupplierInputSource extends AbstractInputSource
 {
+  private static final Logger LOG = new Logger(RecordSupplierInputSource.class);
+
   private final String topic;
   private final RecordSupplier recordSupplier;
   private final boolean useEarliestOffset;
 
+  /**
+   * Maximum amount of time in which the entity iterator will return results. If null, no timeout is applied.
+   */
+  private final Integer iteratorTimeoutMs;
+
   public RecordSupplierInputSource(
       String topic,
       RecordSupplier recordSupplier,
-      boolean useEarliestOffset
+      boolean useEarliestOffset,
+      Integer iteratorTimeoutMs
   )
   {
     this.topic = topic;
     this.recordSupplier = recordSupplier;
     this.useEarliestOffset = useEarliestOffset;
+    this.iteratorTimeoutMs = iteratorTimeoutMs;
     try {
       assignAndSeek(recordSupplier);
     }
@@ -123,13 +133,24 @@ public class RecordSupplierInputSource
       private Iterator recordIterator;
       private Iterator bytesIterator;
       private volatile boolean closed;
+      private final long createTime = System.currentTimeMillis();
+      private final Long terminationTime = iteratorTimeoutMs != null ? createTime + iteratorTimeoutMs : null;
 
       private void waitNextIteratorIfNecessary()
       {
         while (!closed && (bytesIterator == null || !bytesIterator.hasNext())) {
           while (!closed && (recordIterator == null || !recordIterator.hasNext())) {
+            if (terminationTime != null && System.currentTimeMillis() > terminationTime) {
+              LOG.info(
+                  "Configured sampler timeout [%s] has been exceeded, returning without a bytesIterator.",
+                  iteratorTimeoutMs
+              );
+              bytesIterator = null;
+              return;
+            }
             recordIterator = recordSupplier.poll(SeekableStreamSamplerSpec.POLL_TIMEOUT_MS).iterator();
           }
+
           if (!closed) {
             bytesIterator = recordIterator.next().getData().iterator();
           }
@@ -152,6 +173,7 @@ public abstract class SeekableStreamSamplerSpec
       new RecordSupplierInputSource<>(
           ioConfig.getStream(),
           recordSupplier,
-          ioConfig.isUseEarliestSequenceNumber()
+          ioConfig.isUseEarliestSequenceNumber(),
+          samplerConfig.getTimeoutMs() <= 0 ? null : samplerConfig.getTimeoutMs()
       );
       inputFormat = Preconditions.checkNotNull(
           ioConfig.getInputFormat(),
@@ -173,7 +174,8 @@ public abstract class SeekableStreamSamplerSpec
       inputSource = new RecordSupplierInputSource<>(
           ioConfig.getStream(),
           createRecordSupplier(),
-          ioConfig.isUseEarliestSequenceNumber()
+          ioConfig.isUseEarliestSequenceNumber(),
+          samplerConfig.getTimeoutMs() <= 0 ? null : samplerConfig.getTimeoutMs()
       );
       this.entityIterator = inpu

[incubator-druid] branch master updated: SQL: Add "POSITION" function. (#6596)

2018-11-13 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 154b6fb  SQL: Add "POSITION" function. (#6596)
154b6fb is described below

commit 154b6fbcefe1d0b28248a6e5bbfe2a42b08a57ee
Author: Gian Merlino 
AuthorDate: Tue Nov 13 13:39:00 2018 -0800

SQL: Add "POSITION" function. (#6596)

Also add a "fromIndex" argument to the strpos expression function. There
are some -1 and +1 adjustment terms due to the fact that the strpos
expression behaves like Java indexOf (0-indexed), but the POSITION SQL
function is 1-indexed.
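The -1/+1 adjustments described above come from SQL's POSITION being 1-indexed while Java's `String.indexOf` (and the strpos expression) is 0-indexed. A hedged, illustrative helper (not Druid's code):

```java
// Illustration of mapping a 1-indexed SQL POSITION onto 0-indexed indexOf.
class PositionExample {
    // POSITION(needle IN haystack FROM fromPos), all 1-indexed; 0 means "not found"
    static int position(String haystack, String needle, int fromPos) {
        int idx = haystack.indexOf(needle, fromPos - 1); // shift start to 0-indexed
        return idx + 1;                                  // shift result back; -1 becomes 0
    }
}
```

For example, `POSITION('o' IN 'foobar')` is 2 even though `"foobar".indexOf("o")` is 1.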
---
 .../java/org/apache/druid/math/expr/Function.java  | 15 -
 .../org/apache/druid/math/expr/FunctionTest.java   |  4 ++
 docs/content/misc/math-expr.md |  2 +-
 docs/content/querying/sql.md   |  3 +-
 .../builtin/PositionOperatorConversion.java| 76 ++
 .../sql/calcite/planner/DruidOperatorTable.java|  2 +
 .../sql/calcite/expression/ExpressionsTest.java| 36 ++
 7 files changed, 133 insertions(+), 5 deletions(-)

diff --git a/core/src/main/java/org/apache/druid/math/expr/Function.java b/core/src/main/java/org/apache/druid/math/expr/Function.java
index 5ed460a..3521708 100644
--- a/core/src/main/java/org/apache/druid/math/expr/Function.java
+++ b/core/src/main/java/org/apache/druid/math/expr/Function.java
@@ -955,8 +955,8 @@ interface Function
     @Override
     public ExprEval apply(List<Expr> args, Expr.ObjectBinding bindings)
     {
-      if (args.size() != 2) {
-        throw new IAE("Function[%s] needs 2 arguments", name());
+      if (args.size() < 2 || args.size() > 3) {
+        throw new IAE("Function[%s] needs 2 or 3 arguments", name());
       }
 
       final String haystack = NullHandling.nullToEmptyIfNeeded(args.get(0).eval(bindings).asString());
@@ -965,7 +965,16 @@ interface Function
       if (haystack == null || needle == null) {
         return ExprEval.of(null);
       }
-      return ExprEval.of(haystack.indexOf(needle));
+
+      final int fromIndex;
+
+      if (args.size() >= 3) {
+        fromIndex = args.get(2).eval(bindings).asInt();
+      } else {
+        fromIndex = 0;
+      }
+
+      return ExprEval.of(haystack.indexOf(needle, fromIndex));
     }
   }
 
diff --git a/core/src/test/java/org/apache/druid/math/expr/FunctionTest.java b/core/src/test/java/org/apache/druid/math/expr/FunctionTest.java
index 8f2d3bf..bc04283 100644
--- a/core/src/test/java/org/apache/druid/math/expr/FunctionTest.java
+++ b/core/src/test/java/org/apache/druid/math/expr/FunctionTest.java
@@ -95,6 +95,10 @@ public class FunctionTest
   public void testStrpos()
   {
 assertExpr("strpos(x, 'o')", 1L);
+assertExpr("strpos(x, 'o', 0)", 1L);
+assertExpr("strpos(x, 'o', 1)", 1L);
+assertExpr("strpos(x, 'o', 2)", 2L);
+assertExpr("strpos(x, 'o', 3)", -1L);
 assertExpr("strpos(x, '')", 0L);
 assertExpr("strpos(x, 'x')", -1L);
   }
diff --git a/docs/content/misc/math-expr.md b/docs/content/misc/math-expr.md
index 321f8fb..a798d0b 100644
--- a/docs/content/misc/math-expr.md
+++ b/docs/content/misc/math-expr.md
@@ -71,7 +71,7 @@ The following built-in functions are available.
 |replace|replace(expr, pattern, replacement) replaces pattern with replacement|
 |substring|substring(expr, index, length) behaves like java.lang.String's substring|
 |strlen|strlen(expr) returns length of a string in UTF-16 code units|
-|strpos|strpos(haystack, needle) returns the position of the needle within the haystack, with indexes starting from 0. If the needle is not found then the function returns -1.|
+|strpos|strpos(haystack, needle[, fromIndex]) returns the position of the needle within the haystack, with indexes starting from 0. The search will begin at fromIndex, or 0 if fromIndex is not specified. If the needle is not found then the function returns -1.|
 |trim|trim(expr[, chars]) remove leading and trailing characters from `expr` if they are present in `chars`. `chars` defaults to ' ' (space) if not provided.|
 |ltrim|ltrim(expr[, chars]) remove leading characters from `expr` if they are present in `chars`. `chars` defaults to ' ' (space) if not provided.|
 |rtrim|rtrim(expr[, chars]) remove trailing characters from `expr` if they are present in `chars`. `chars` defaults to ' ' (space) if not provided.|
diff --git a/docs/content/querying/sql.md b/docs/content/querying/sql.md
index e1fb2e4..f996c62 100644
--- a/docs/content/querying/sql.md
+++ b/docs/content/querying/sql.md
@@ -156,9 +156,10 @@ String functions accept strings, 

[incubator-druid] branch master updated: Support DogStatsD style tags in statsd-emitter (#6605)

2018-11-19 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new e0d1dc5  Support DogStatsD style tags in statsd-emitter  (#6605)
e0d1dc5 is described below

commit e0d1dc58465f13d425fb5798fd6f41d8995b
Author: Deiwin Sarjas 
AuthorDate: Mon Nov 19 19:47:57 2018 +0200

Support DogStatsD style tags in statsd-emitter  (#6605)

* Replace StatsD client library

The [Datadog package][1] is a StatsD compatible drop-in replacement for the
client library, but it seems to be [better maintained][2] and has support 
for
Datadog DogStatsD specific features, which will be made use of in a 
subsequent
commit.

The `count`, `time`, and `gauge` methods are actually exactly compatible 
with
the previous library and the modifications shouldn't be required, but 
EasyMock
seems to have a hard time dealing with the variable arguments added by the
DogStatsD library and causes tests to fail if no arguments are provided for 
the
last String vararg. Passing an empty array fixes the test failures.

[1]: https://github.com/DataDog/java-dogstatsd-client
[2]: 
https://github.com/tim-group/java-statsd-client/issues/37#issuecomment-248698856

* Retain dimension key information for StatsD metrics

This doesn't change behavior, but allows separating dimensions from the 
metric
name in subsequent commits.

There is a possible order change for values from
`dimsBuilder.build().values()`, but from the tests it looks like it doesn't
affect actual behavior and the order of user dimensions is also retained.

* Support DogStatsD style tags in statsd-emitter

Datadog [doesn't support name-encoded dimensions and uses a concept of 
_tags_
instead.][1] This change allows Datadog users to send the metrics without
having to encode the various dimensions in the metric names. This enables
building graphs and monitors with and without aggregation across various
dimensions from the same data.

As tests in this commit verify, the behavior remains the same for users who
don't enable the `druid.emitter.statsd.dogstatsd` configuration flag.

[1]: 
https://www.datadoghq.com/blog/the-power-of-tagged-metrics/#tags-decouple-collection-and-reporting

* Disable convertRange behavior for DogStatsD users

DogStatsD, unlike regular StatsD, supports floating-point values, so this
behavior is unnecessary. It would be possible to still support 
`convertRange`,
even with `dogstatsd` enabled, but that would mean that people using the
default mapping would have some of the gauges unnecessarily converted.

`time` is in milliseconds and doesn't support floating-point values.
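The difference between the two encodings this change distinguishes can be shown with a small, hedged illustration (not the emitter's actual code): plain StatsD flattens dimension values into the metric name, while DogStatsD carries dimensions as `key:value` tags alongside an unchanged name.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hedged illustration of name-encoded dimensions vs. DogStatsD tags.
class MetricEncoding {
    // Legacy StatsD style: dimension values become path segments of the name.
    static String statsdName(String metric, Map<String, String> dims) {
        return metric + "." + String.join(".", dims.values());
    }

    // DogStatsD style: dimensions travel separately as "key:value" tags.
    static String[] dogstatsdTags(Map<String, String> dims) {
        return dims.entrySet().stream()
                .map(e -> e.getKey() + ":" + e.getValue())
                .toArray(String[]::new);
    }
}
```

Keeping dimensions as tags is what lets Datadog aggregate the same metric across any subset of dimensions without parsing the name.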
---
 .../development/extensions-contrib/statsd.md   |  1 +
 extensions-contrib/statsd-emitter/pom.xml  |  6 +-
 .../druid/emitter/statsd/DimensionConverter.java   |  6 +-
 .../apache/druid/emitter/statsd/StatsDEmitter.java | 74 +-
 .../druid/emitter/statsd/StatsDEmitterConfig.java  | 18 +-
 .../emitter/statsd/DimensionConverterTest.java | 10 +--
 .../druid/emitter/statsd/StatsDEmitterTest.java| 64 ---
 7 files changed, 142 insertions(+), 37 deletions(-)

diff --git a/docs/content/development/extensions-contrib/statsd.md b/docs/content/development/extensions-contrib/statsd.md
index aa89af9..5a150bf 100644
--- a/docs/content/development/extensions-contrib/statsd.md
+++ b/docs/content/development/extensions-contrib/statsd.md
@@ -44,6 +44,7 @@ All the configuration parameters for the StatsD emitter are under `druid.emitter
 |`druid.emitter.statsd.includeHost`|Flag to include the hostname as part of the metric name.|no|false|
 |`druid.emitter.statsd.dimensionMapPath`|JSON file defining the StatsD type, and desired dimensions for every Druid metric|no|Default mapping provided. See below.|
 |`druid.emitter.statsd.blankHolder`|The blank character replacement as statsD does not support path with blank character|no|"-"|
+|`druid.emitter.statsd.dogstatsd`|Flag to enable [DogStatsD](https://docs.datadoghq.com/developers/dogstatsd/) support. Causes dimensions to be included as tags, not as a part of the metric name. `convertRange` fields will be ignored.|no|false|
 
 ### Druid to StatsD Event Converter
 
diff --git a/extensions-contrib/statsd-emitter/pom.xml b/extensions-contrib/statsd-emitter/pom.xml
index d8c49ab..e343795 100644
--- a/extensions-contrib/statsd-emitter/pom.xml
+++ b/extensions-contrib/statsd-emitter/pom.xml
@@ -41,9 +41,9 @@
       <scope>provided</scope>
     </dependency>
     <dependency>
-      <groupId>com.timgroup</groupId>
-      <artifactId>java-statsd-client</artifactId>
-      <version>3.0.1</version>
+      <groupId>com.datadoghq</groupId>
+      <artifactId>java-dogstatsd-client</artifactId>
+      <version>2.6.1</version>
     </dependency>
     <dependency>
       <groupId>junit</groupId>
diff --

[incubator-druid] branch master updated: Fix broken link in docs toc (#6728)

2018-12-12 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 5591468  Fix broken link in docs toc (#6728)
5591468 is described below

commit 55914687bb94041d60bb1c401608bdc017f30bc7
Author: Clint Wylie 
AuthorDate: Wed Dec 12 15:14:38 2018 -0800

Fix broken link in docs toc (#6728)

Change 'peon.html' to the correct link, 'peons.html'. No redirect is needed because the file has always been 'peons', just an incorrect link was introduced in the toc here https://github.com/apache/incubator-druid/pull/6259/files#diff-45297643736c5fb6da0e92f2c3df5d68R89
---
 docs/content/toc.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/content/toc.md b/docs/content/toc.md
index 2da4248..50b3187 100644
--- a/docs/content/toc.md
+++ b/docs/content/toc.md
@@ -108,7 +108,7 @@ layout: toc
 * [Indexing Service](/docs/VERSION/design/indexing-service.html)
   * [Overlord](/docs/VERSION/design/overlord.html)
   * [MiddleManager](/docs/VERSION/design/middlemanager.html)
-  * [Peons](/docs/VERSION/design/peon.html)
+  * [Peons](/docs/VERSION/design/peons.html)
 * [Realtime (Deprecated)](/docs/VERSION/design/realtime.html)
   * Dependencies
 * [Deep Storage](/docs/VERSION/dependencies/deep-storage.html)


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[incubator-druid] branch master updated: Add support segmentGranularity for CompactionTask (#6758)

2019-01-03 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 9ad6a73  Add support segmentGranularity for CompactionTask (#6758)
9ad6a73 is described below

commit 9ad6a733a58e81ef2e0dee067b1df8477af1dab4
Author: Jihoon Son 
AuthorDate: Thu Jan 3 17:50:45 2019 -0800

Add support segmentGranularity for CompactionTask (#6758)

* Add support segmentGranularity

* add doc and fix combination of options

* improve doc
---
 docs/content/ingestion/compaction.md   |  21 +-
 docs/content/ingestion/ingestion-spec.md   |   9 +-
 .../druid/indexing/common/task/CompactionTask.java | 148 ---
 .../common/task/CompactionTaskRunTest.java | 482 +
 .../indexing/common/task/CompactionTaskTest.java   | 274 +---
 .../druid/indexing/common/task/IndexTaskTest.java  |   2 +-
 .../indexing/common/task/IngestionTestBase.java| 100 -
 .../granularity/UniformGranularitySpec.java|   2 +-
 8 files changed, 916 insertions(+), 122 deletions(-)

diff --git a/docs/content/ingestion/compaction.md b/docs/content/ingestion/compaction.md
index cd7345f..2991584 100644
--- a/docs/content/ingestion/compaction.md
+++ b/docs/content/ingestion/compaction.md
@@ -34,6 +34,7 @@ Compaction tasks merge all segments of the given interval. The syntax is:
 "interval": ,
 "dimensions" ,
 "keepSegmentGranularity": ,
+"segmentGranularity": ,
 "targetCompactionSizeBytes": 
 "tuningConfig" ,
 "context": 
@@ -47,11 +48,23 @@ Compaction tasks merge all segments of the given interval. The syntax is:
 |`dataSource`|DataSource name to be compacted|Yes|
 |`interval`|Interval of segments to be compacted|Yes|
 |`dimensions`|Custom dimensionsSpec. The compaction task will use this dimensionsSpec if it exists, instead of generating one. See below for more details.|No|
-|`keepSegmentGranularity`|If set to true, compactionTask will keep the time chunk boundaries and merge segments only if they fall into the same time chunk.|No (default = true)|
+|`segmentGranularity`|If this is set, compactionTask will change the segment granularity for the given interval. See [segmentGranularity of Uniform Granularity Spec](./ingestion-spec.html#uniform-granularity-spec) for more details. See the below table for the behavior.|No|
+|`keepSegmentGranularity`|Deprecated. Please use `segmentGranularity` instead. See the below table for its behavior.|No|
 |`targetCompactionSizeBytes`|Target segment size after compaction. Cannot be used with `targetPartitionSize`, `maxTotalRows`, and `numShards` in tuningConfig.|No|
 |`tuningConfig`|[Index task tuningConfig](../ingestion/native_tasks.html#tuningconfig)|No|
 |`context`|[Task context](../ingestion/locking-and-priority.html#task-context)|No|
 
+### Used segmentGranularity based on `segmentGranularity` and `keepSegmentGranularity`
+
+|SegmentGranularity|keepSegmentGranularity|Used SegmentGranularity|
+|--|--|---|
+|Non-null|True|Error|
+|Non-null|False|Given segmentGranularity|
+|Non-null|Null|Given segmentGranularity|
+|Null|True|Original segmentGranularity|
+|Null|False|ALL segmentGranularity. All events will fall into the single time 
chunk.|
+|Null|Null|Original segmentGranularity|
+
 An example of compaction task is
 
 ```json
@@ -63,9 +76,9 @@ An example of compaction task is
 ```
 
This compaction task reads _all segments_ of the interval `2017-01-01/2018-01-01` and results in new segments.
-Note that intervals of the input segments are merged into a single interval of `2017-01-01/2018-01-01` no matter what the segmentGranularity was.
-To control the number of result segments, you can set `targetPartitionSize` or `numShards`. See [indexTuningConfig](../ingestion/native_tasks.html#tuningconfig) for more details.
-To merge each day's worth of data into separate segments, you can submit multiple `compact` tasks, one for each day. They will run in parallel.
+Since both `segmentGranularity` and `keepSegmentGranularity` are null, the original segment granularity will be retained rather than changed after compaction.
+To control the number of result segments per time chunk, you can set `targetPartitionSize` or `numShards`. See [indexTuningConfig](../ingestion/native_tasks.html#tuningconfig) for more details.
+Please note that you can run multiple compactionTasks at the same time. For example, you can run 12 compactionTasks per month instead of running a single task for the entire year.
 
A compaction task internally generates an `index` task spec for performing compaction work with some fixed parameters.
For example, its `firehose` is always the [ingestSegmentSpec](./firehos

[incubator-druid] branch master updated: Show how to include classpath in command (#6802)

2019-01-03 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 0e04acc  Show how to include classpath in command (#6802)
0e04acc is described below

commit 0e04acca43eadcbcf96e3b0659e82d6c5336b019
Author: thomask 
AuthorDate: Fri Jan 4 03:31:55 2019 +0100

Show how to include classpath in command (#6802)

Would have saved me some time
---
 docs/content/operations/dump-segment.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/content/operations/dump-segment.md 
b/docs/content/operations/dump-segment.md
index 62d9398..3ce0354 100644
--- a/docs/content/operations/dump-segment.md
+++ b/docs/content/operations/dump-segment.md
@@ -31,7 +31,7 @@ complex metric values may not be complete.
 To run the tool, point it at a segment directory and provide a file for 
writing output:
 
 ```
-java org.apache.druid.cli.Main tools dump-segment \
+java -classpath "/my/druid/lib/*" org.apache.druid.cli.Main tools dump-segment 
\
   --directory /home/druid/path/to/segment/ \
   --out /home/druid/output.txt
 ```


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[incubator-druid] branch master updated: Fix issue that tasks failed because of no sink for identifier (#6724)

2019-01-04 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 636964f  Fix issue that tasks failed because of no sink for identifier 
(#6724)
636964f is described below

commit 636964fcb51648f0701dd861c76f9017cc03829d
Author: Mingming Qiu 
AuthorDate: Sat Jan 5 09:09:11 2019 +0800

Fix issue that tasks failed because of no sink for identifier (#6724)

* Fix issue that tasks failed because of no sink for identifier

* make find sinks to persist run in one callable together with the actual 
persist work

* Revert "make find sinks to persist run in one callable together with the 
actual persist work"

This reverts commit a24a2d80aeaf8f047d676e7260900fe916f36b78.
---
 .../druid/segment/realtime/appenderator/AppenderatorImpl.java| 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git 
a/server/src/main/java/org/apache/druid/segment/realtime/appenderator/AppenderatorImpl.java
 
b/server/src/main/java/org/apache/druid/segment/realtime/appenderator/AppenderatorImpl.java
index 9c837b0..22c1ccc 100644
--- 
a/server/src/main/java/org/apache/druid/segment/realtime/appenderator/AppenderatorImpl.java
+++ 
b/server/src/main/java/org/apache/druid/segment/realtime/appenderator/AppenderatorImpl.java
@@ -90,6 +90,7 @@ import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
 import java.util.HashMap;
+import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
@@ -485,8 +486,12 @@ public class AppenderatorImpl implements Appenderator
final List<Pair<FireHydrant, SegmentIdentifier>> indexesToPersist = new 
ArrayList<>();
 int numPersistedRows = 0;
 long bytesPersisted = 0L;
-for (SegmentIdentifier identifier : sinks.keySet()) {
-  final Sink sink = sinks.get(identifier);
+Iterator<Map.Entry<SegmentIdentifier, Sink>> iterator = 
sinks.entrySet().iterator();
+
+while (iterator.hasNext()) {
+  final Map.Entry<SegmentIdentifier, Sink> entry = iterator.next();
+  final SegmentIdentifier identifier = entry.getKey();
+  final Sink sink = entry.getValue();
   if (sink == null) {
 throw new ISE("No sink for identifier: %s", identifier);
   }
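The change above is essentially the standard entrySet-iteration idiom: each key and value arrive together, so there is no second `sinks.get(identifier)` lookup per key. A minimal, self-contained sketch, with `String` stand-ins for `SegmentIdentifier` and `Sink`:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EntryIterationSketch
{
  // Visits every entry, throwing on a null value; returns the count visited.
  public static int visitSinks(Map<String, String> sinks)
  {
    int visited = 0;
    for (Map.Entry<String, String> entry : sinks.entrySet()) {
      final String identifier = entry.getKey();
      final String sink = entry.getValue();  // no extra sinks.get(identifier) call
      if (sink == null) {
        throw new IllegalStateException("No sink for identifier: " + identifier);
      }
      visited++;
    }
    return visited;
  }

  public static void main(String[] args)
  {
    Map<String, String> sinks = new LinkedHashMap<>();
    sinks.put("seg1", "sink1");
    sinks.put("seg2", "sink2");
    System.out.println(visitSinks(sinks));  // 2
  }
}
```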





[incubator-druid] branch master updated: Fix TaskLockbox when there are multiple intervals of the same start but different end (#6822)

2019-01-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 934c83b  Fix TaskLockbox when there are multiple intervals of the same 
start but different end (#6822)
934c83b is described below

commit 934c83bca6df5db913c130af039d8893a9fcb16a
Author: Jihoon Son 
AuthorDate: Wed Jan 9 19:38:27 2019 -0800

Fix TaskLockbox when there are multiple intervals of the same start but 
different end (#6822)

* Fix TaskLockbox when there are multiple intervals of the same start but 
different end

* fix build

* fix npe
---
 .../druid/indexing/overlord/TaskLockbox.java   | 139 +
 .../druid/indexing/overlord/TaskLockboxTest.java   |  43 +++
 2 files changed, 131 insertions(+), 51 deletions(-)

diff --git 
a/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java
 
b/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java
index 96451b7..626f4e3 100644
--- 
a/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java
+++ 
b/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java
@@ -38,6 +38,7 @@ import org.apache.druid.java.util.common.Pair;
 import org.apache.druid.java.util.common.StringUtils;
 import org.apache.druid.java.util.common.guava.Comparators;
 import org.apache.druid.java.util.emitter.EmittingLogger;
+import org.joda.time.DateTime;
 import org.joda.time.Interval;
 
 import javax.annotation.Nullable;
@@ -51,6 +52,7 @@ import java.util.Map;
 import java.util.NavigableMap;
 import java.util.NavigableSet;
 import java.util.Set;
+import java.util.SortedMap;
 import java.util.TreeMap;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.locks.Condition;
@@ -66,11 +68,14 @@ import java.util.stream.StreamSupport;
  */
 public class TaskLockbox
 {
-  // Datasource -> Interval -> list of (Tasks + TaskLock)
+  // Datasource -> startTime -> Interval -> list of (Tasks + TaskLock)
   // Multiple shared locks can be acquired for the same dataSource and 
interval.
   // Note that revoked locks are also maintained in this map to notify that 
those locks are revoked to the callers when
   // they acquire the same locks again.
-  private final Map<String, NavigableMap<Interval, List<TaskLockPosse>>> 
running = new HashMap<>();
+  // Also, the key of the second inner map is the start time to find all 
intervals properly starting with the same
+  // startTime.
+  private final Map<String, NavigableMap<DateTime, SortedMap<Interval, List<TaskLockPosse>>>> running = new HashMap<>();
+
   private final TaskStorage taskStorage;
   private final ReentrantLock giant = new ReentrantLock(true);
   private final Condition lockReleaseCondition = giant.newCondition();
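The new two-level structure can be sketched with plain longs standing in for Joda-Time `DateTime`/`Interval` and strings standing in for lock posses; since the outer key fixes the start time, the inner map in this toy version is keyed by end time (the datasource level is elided):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.NavigableMap;
import java.util.SortedMap;
import java.util.TreeMap;

public class TwoLevelLockMap
{
  // Outer key = interval start, inner key = interval end.
  private final NavigableMap<Long, SortedMap<Long, List<String>>> running = new TreeMap<>();

  public void add(long start, long end, String posse)
  {
    running.computeIfAbsent(start, k -> new TreeMap<>())
           .computeIfAbsent(end, k -> new ArrayList<>())
           .add(posse);
  }

  // All posses whose interval starts exactly at `start`, whatever the end time.
  public List<String> possesStartingAt(long start)
  {
    SortedMap<Long, List<String>> byEnd = running.get(start);
    if (byEnd == null) {
      return Collections.emptyList();
    }
    List<String> result = new ArrayList<>();
    byEnd.values().forEach(result::addAll);
    return result;
  }

  public static void main(String[] args)
  {
    TwoLevelLockMap map = new TwoLevelLockMap();
    map.add(0, 10, "lockA");
    map.add(0, 20, "lockB");  // same start, different end
    map.add(5, 10, "lockC");
    System.out.println(map.possesStartingAt(0));  // [lockA, lockB]
  }
}
```

This is the property the fix needs: locks on `0/10` and `0/20` land under the same outer key, so one lookup finds every interval with the same start.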
@@ -326,7 +331,14 @@ public class TaskLockbox
   final TaskLockType lockType
   )
   {
-return createOrFindLockPosse(task, interval, null, lockType);
+giant.lock();
+
+try {
+  return createOrFindLockPosse(task, interval, null, lockType);
+}
+finally {
+  giant.unlock();
+}
   }
 
   /**
@@ -584,7 +596,8 @@ public class TaskLockbox
   final TaskLockPosse posseToUse = new TaskLockPosse(
   new TaskLock(lockType, groupId, dataSource, interval, version, 
priority, revoked)
   );
-  running.computeIfAbsent(dataSource, k -> new 
TreeMap<>(Comparators.intervalsByStartThenEnd()))
+  running.computeIfAbsent(dataSource, k -> new TreeMap<>())
+ .computeIfAbsent(interval.getStart(), k -> new 
TreeMap<>(Comparators.intervalsByStartThenEnd()))
  .computeIfAbsent(interval, k -> new ArrayList<>())
  .add(posseToUse);
 
@@ -612,7 +625,7 @@ public class TaskLockbox
   CriticalAction action
   ) throws Exception
   {
-giant.lockInterruptibly();
+giant.lock();
 
 try {
   return action.perform(isTaskLocksValid(task, intervals));
@@ -624,13 +637,19 @@ public class TaskLockbox
 
  private boolean isTaskLocksValid(Task task, List<Interval> intervals)
   {
-return intervals
-.stream()
-.allMatch(interval -> {
-  final TaskLock lock = getOnlyTaskLockPosseContainingInterval(task, 
interval).getTaskLock();
-  // Tasks cannot enter the critical section with a shared lock
-  return !lock.isRevoked() && lock.getType() != TaskLockType.SHARED;
-});
+giant.lock();
+try {
+  return intervals
+  .stream()
+  .allMatch(interval -> {
+final TaskLock lock = getOnlyTaskLockPosseContainingInterval(task, 
interval).getTaskLock();
+// Tasks cannot enter the critical section with a shared lock
+return !lock.isRevoked() && lock.getType() != TaskLockType.SHARED;
+  });
+}
+finally {
+  giant.unlock();
+}
   }
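The pattern applied in this hunk, where each reader acquires the giant lock itself in a try/finally rather than assuming the caller holds it, looks roughly like this simplified sketch (the validity predicate is a placeholder):

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class GuardedRead
{
  private final ReentrantLock giant = new ReentrantLock(true);

  public boolean allValid(List<Integer> values)
  {
    giant.lock();
    try {
      return values.stream().allMatch(v -> v >= 0);  // placeholder validity check
    }
    finally {
      giant.unlock();  // released even if the predicate throws
    }
  }

  public static void main(String[] args)
  {
    System.out.println(new GuardedRead().allValid(List.of(1, 2, 3)));  // true
  }
}
```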
 
   private 

[incubator-druid] branch master updated: bugfix: Materialized view not support post-aggregator (#6689)

2019-01-10 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new ea3f426  bugfix: Materialized view not support post-aggregator (#6689)
ea3f426 is described below

commit ea3f4261712b063ae7629da6864ec59f73d6dbf4
Author: pzhdfy <982092...@qq.com>
AuthorDate: Fri Jan 11 06:25:09 2019 +0800

bugfix: Materialized view not support post-aggregator (#6689)

* bugfix: Materialized view not support post-aggregator

* add unit test
---
 .../MaterializedViewQueryQueryToolChest.java   |  7 ++
 .../MaterializedViewQueryQueryToolChestTest.java   | 92 ++
 2 files changed, 99 insertions(+)

diff --git 
a/extensions-contrib/materialized-view-selection/src/main/java/org/apache/druid/query/materializedview/MaterializedViewQueryQueryToolChest.java
 
b/extensions-contrib/materialized-view-selection/src/main/java/org/apache/druid/query/materializedview/MaterializedViewQueryQueryToolChest.java
index 34b5566..39e2f19 100644
--- 
a/extensions-contrib/materialized-view-selection/src/main/java/org/apache/druid/query/materializedview/MaterializedViewQueryQueryToolChest.java
+++ 
b/extensions-contrib/materialized-view-selection/src/main/java/org/apache/druid/query/materializedview/MaterializedViewQueryQueryToolChest.java
@@ -74,6 +74,13 @@ public class MaterializedViewQueryQueryToolChest extends 
QueryToolChest
   }
 
   @Override
+  public Function makePostComputeManipulatorFn(Query query, 
MetricManipulationFn fn)
+  {
+Query realQuery = getRealQuery(query);
+return 
warehouse.getToolChest(realQuery).makePostComputeManipulatorFn(realQuery, fn);
+  }
+
+  @Override
   public TypeReference getResultTypeReference()
   {
 return null;
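The bug and fix boil down to a decorator that must forward a result-manipulation hook to the tool chest of the underlying ("real") query, or post-aggregators silently become no-ops. A toy sketch with simplified stand-in types (none of these names are Druid's):

```java
import java.util.function.Function;

interface ToolChest
{
  Function<Integer, Integer> makePostComputeManipulatorFn();
}

class RealToolChest implements ToolChest
{
  @Override
  public Function<Integer, Integer> makePostComputeManipulatorFn()
  {
    return x -> x + 1;  // stand-in for applying post-aggregators
  }
}

class DecoratingToolChest implements ToolChest
{
  private final ToolChest delegate;

  DecoratingToolChest(ToolChest delegate)
  {
    this.delegate = delegate;
  }

  @Override
  public Function<Integer, Integer> makePostComputeManipulatorFn()
  {
    // The fix: forward the hook instead of inheriting a default that drops it.
    return delegate.makePostComputeManipulatorFn();
  }
}

public class DelegationSketch
{
  public static void main(String[] args)
  {
    ToolChest chest = new DecoratingToolChest(new RealToolChest());
    System.out.println(chest.makePostComputeManipulatorFn().apply(41));  // 42
  }
}
```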
diff --git 
a/extensions-contrib/materialized-view-selection/src/test/java/org/apache/druid/query/materializedview/MaterializedViewQueryQueryToolChestTest.java
 
b/extensions-contrib/materialized-view-selection/src/test/java/org/apache/druid/query/materializedview/MaterializedViewQueryQueryToolChestTest.java
new file mode 100644
index 000..fae69e6
--- /dev/null
+++ 
b/extensions-contrib/materialized-view-selection/src/test/java/org/apache/druid/query/materializedview/MaterializedViewQueryQueryToolChestTest.java
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.query.materializedview;
+
+import com.google.common.base.Function;
+import com.google.common.collect.ImmutableMap;
+import org.apache.druid.java.util.common.DateTimes;
+import org.apache.druid.query.Druids;
+import org.apache.druid.query.MapQueryToolChestWarehouse;
+import org.apache.druid.query.Query;
+import org.apache.druid.query.QueryRunnerTestHelper;
+import org.apache.druid.query.QueryToolChest;
+import org.apache.druid.query.Result;
+import org.apache.druid.query.aggregation.AggregatorFactory;
+import org.apache.druid.query.aggregation.MetricManipulationFn;
+import org.apache.druid.query.timeseries.TimeseriesQuery;
+import org.apache.druid.query.timeseries.TimeseriesQueryQueryToolChest;
+import org.apache.druid.query.timeseries.TimeseriesResultValue;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.util.Map;
+
+public class MaterializedViewQueryQueryToolChestTest
+{
+  @Test
+  public void testMakePostComputeManipulatorFn()
+  {
+TimeseriesQuery realQuery = Druids.newTimeseriesQueryBuilder()
+
.dataSource(QueryRunnerTestHelper.dataSource)
+
.granularity(QueryRunnerTestHelper.dayGran)
+
.intervals(QueryRunnerTestHelper.fullOnInterval)
+
.aggregators(QueryRunnerTestHelper.rowsCount)
+.descending(true)
+.build();
+MaterializedViewQuery materializedViewQuery = new 
MaterializedViewQuery(realQuery, null);
+
+QueryToolChest materializedViewQueryQueryToolChest =
+new MaterializedV

[incubator-druid] branch master updated: Kafka version is updated (#6835)

2019-01-10 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 55927bf  Kafka version is updated (#6835)
55927bf is described below

commit 55927bf8e3b60e4aa48aa90f0eee2be92b6d57eb
Author: Furkan KAMACI 
AuthorDate: Fri Jan 11 04:58:40 2019 +0300

Kafka version is updated (#6835)

Update Kafka version in tutorial from 0.10.2.0 to 0.10.2.2
---
 docs/content/tutorials/tutorial-kafka.md | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/content/tutorials/tutorial-kafka.md 
b/docs/content/tutorials/tutorial-kafka.md
index 505481f..c6ce8c6 100644
--- a/docs/content/tutorials/tutorial-kafka.md
+++ b/docs/content/tutorials/tutorial-kafka.md
@@ -35,13 +35,13 @@ don't need to have loaded any data yet.
 ## Download and start Kafka
 
 [Apache Kafka](http://kafka.apache.org/) is a high throughput message bus that 
works well with
-Druid.  For this tutorial, we will use Kafka 0.10.2.0. To download Kafka, 
issue the following
+Druid.  For this tutorial, we will use Kafka 0.10.2.2. To download Kafka, 
issue the following
 commands in your terminal:
 
 ```bash
-curl -O https://archive.apache.org/dist/kafka/0.10.2.0/kafka_2.11-0.10.2.0.tgz
-tar -xzf kafka_2.11-0.10.2.0.tgz
-cd kafka_2.11-0.10.2.0
+curl -O https://archive.apache.org/dist/kafka/0.10.2.2/kafka_2.12-0.10.2.2.tgz
+tar -xzf kafka_2.12-0.10.2.2.tgz
+cd kafka_2.12-0.10.2.2
 ```
 
 Start a Kafka broker by running the following command in a new terminal:





[incubator-druid] branch master updated: Fix auto compaction to compact only same or abutting intervals (#6808)

2019-01-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new a07e66c  Fix auto compaction to compact only same or abutting 
intervals (#6808)
a07e66c is described below

commit a07e66c5408e833d4bc62df1fd9fcd32f3e9ce89
Author: Jihoon Son 
AuthorDate: Wed Jan 16 14:54:11 2019 -0800

Fix auto compaction to compact only same or abutting intervals (#6808)

* Fix auto compaction to compact only same or abutting intervals

* fix test
---
 .../helper/NewestSegmentFirstIterator.java |  24 -
 .../DruidCoordinatorSegmentCompactorTest.java  |  12 +--
 .../helper/NewestSegmentFirstPolicyTest.java   | 113 +++--
 3 files changed, 91 insertions(+), 58 deletions(-)

diff --git 
a/server/src/main/java/org/apache/druid/server/coordinator/helper/NewestSegmentFirstIterator.java
 
b/server/src/main/java/org/apache/druid/server/coordinator/helper/NewestSegmentFirstIterator.java
index 6a09f5b..8b58e86 100644
--- 
a/server/src/main/java/org/apache/druid/server/coordinator/helper/NewestSegmentFirstIterator.java
+++ 
b/server/src/main/java/org/apache/druid/server/coordinator/helper/NewestSegmentFirstIterator.java
@@ -269,6 +269,15 @@ public class NewestSegmentFirstIterator implements 
CompactionSegmentIterator
  final List<PartitionChunk<DataSegment>> chunks = 
Lists.newArrayList(timeChunkHolder.getObject().iterator());
   final long timeChunkSizeBytes = chunks.stream().mapToLong(chunk -> 
chunk.getObject().getSize()).sum();
 
+  final boolean isSameOrAbuttingInterval;
+  final Interval lastInterval = 
segmentsToCompact.getIntervalOfLastSegment();
+  if (lastInterval == null) {
+isSameOrAbuttingInterval = true;
+  } else {
+final Interval currentInterval = 
chunks.get(0).getObject().getInterval();
+isSameOrAbuttingInterval = currentInterval.isEqual(lastInterval) || 
currentInterval.abuts(lastInterval);
+  }
+
   // The segments in a holder should be added all together or not.
   final boolean isCompactibleSize = SegmentCompactorUtil.isCompactibleSize(
   inputSegmentSize,
@@ -280,7 +289,10 @@ public class NewestSegmentFirstIterator implements 
CompactionSegmentIterator
   segmentsToCompact.getNumSegments(),
   chunks.size()
   );
-  if (isCompactibleSize && isCompactibleNum && (!keepSegmentGranularity || 
segmentsToCompact.isEmpty())) {
+  if (isCompactibleSize
+  && isCompactibleNum
+  && isSameOrAbuttingInterval
+  && (!keepSegmentGranularity || segmentsToCompact.isEmpty())) {
 chunks.forEach(chunk -> segmentsToCompact.add(chunk.getObject()));
   } else {
 if (segmentsToCompact.getNumSegments() > 1) {
@@ -514,6 +526,16 @@ public class NewestSegmentFirstIterator implements 
CompactionSegmentIterator
   return segments.isEmpty();
 }
 
+@Nullable
+private Interval getIntervalOfLastSegment()
+{
+  if (segments.isEmpty()) {
+return null;
+  } else {
+return segments.get(segments.size() - 1).getInterval();
+  }
+}
+
 private int getNumSegments()
 {
   return segments.size();
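The same-or-abutting test added above reduces to a comparison of interval endpoints. A toy sketch with `[start, end)` pairs of longs standing in for Joda-Time Intervals:

```java
public class AbutCheck
{
  // True when the two intervals are identical, or the end of one is exactly
  // the start of the other (no gap and no overlap).
  public static boolean sameOrAbutting(long aStart, long aEnd, long bStart, long bEnd)
  {
    boolean equal = aStart == bStart && aEnd == bEnd;
    boolean abuts = aEnd == bStart || bEnd == aStart;
    return equal || abuts;
  }

  public static void main(String[] args)
  {
    System.out.println(sameOrAbutting(0, 10, 10, 20));  // true: abutting
    System.out.println(sameOrAbutting(0, 10, 15, 20));  // false: gap between chunks
  }
}
```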
diff --git 
a/server/src/test/java/org/apache/druid/server/coordinator/helper/DruidCoordinatorSegmentCompactorTest.java
 
b/server/src/test/java/org/apache/druid/server/coordinator/helper/DruidCoordinatorSegmentCompactorTest.java
index 711c181..52b78e2 100644
--- 
a/server/src/test/java/org/apache/druid/server/coordinator/helper/DruidCoordinatorSegmentCompactorTest.java
+++ 
b/server/src/test/java/org/apache/druid/server/coordinator/helper/DruidCoordinatorSegmentCompactorTest.java
@@ -226,23 +226,23 @@ public class DruidCoordinatorSegmentCompactorTest
 expectedVersionSupplier
 );
 
-// compact for 2017-01-07T12:00:00.000Z/2017-01-08T12:00:00.000Z
-expectedRemainingSegments -= 40;
+// compact for 2017-01-08T00:00:00.000Z/2017-01-08T12:00:00.000Z
+expectedRemainingSegments -= 20;
 assertCompactSegments(
 compactor,
 keepSegmentGranularity,
-Intervals.of("2017-01-%02dT12:00:00/2017-01-%02dT12:00:00", 4, 8),
+Intervals.of("2017-01-%02dT00:00:00/2017-01-%02dT12:00:00", 8, 8),
 expectedRemainingSegments,
 expectedCompactTaskCount,
 expectedVersionSupplier
 );
 
-for (int endDay = 4; endDay > 1; endDay -= 1) {
+for (int endDay = 5; endDay > 1; endDay -= 1) {
   expectedRemainingSegments -= 40;
   assertCompactSegments(
   compactor,
   keepSegmentGranularity,
-  Intervals.of("2017-01-%02dT12:00:00/2017-01-%02dT12:00:00", endDay - 
1, endDay),
+  Intervals.of("2017-01-%02dT00:00:00/2017-01-%02dT00:00:00"

[incubator-druid] branch master updated: Improve doc for auto compaction (#6782)

2019-01-23 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 3b020fd  Improve doc for auto compaction (#6782)
3b020fd is described below

commit 3b020fd81bebecf52dc7edb48047008052603a71
Author: Jihoon Son 
AuthorDate: Wed Jan 23 16:21:45 2019 -0800

Improve doc for auto compaction (#6782)

* Improve doc for auto compaction

* address comments

* address comments

* address comments
---
 docs/content/configuration/index.md | 19 +++---
 docs/content/design/coordinator.md  | 40 -
 2 files changed, 38 insertions(+), 21 deletions(-)

diff --git a/docs/content/configuration/index.md 
b/docs/content/configuration/index.md
index fa66424..6d80de3 100644
--- a/docs/content/configuration/index.md
+++ b/docs/content/configuration/index.md
@@ -828,14 +828,14 @@ A description of the compaction config is:
 |--------|-----------|--------|
 |`dataSource`|dataSource name to be compacted.|yes|
 |`keepSegmentGranularity`|Set 
[keepSegmentGranularity](../ingestion/compaction.html) to true for 
compactionTask.|no (default = true)|
-|`taskPriority`|[Priority](../ingestion/tasks.html#task-priorities) of compact 
task.|no (default = 25)|
-|`inputSegmentSizeBytes`|Total input segment size of a compactionTask.|no 
(default = 419430400)|
-|`targetCompactionSizeBytes`|The target segment size of compaction. The actual 
size of a compact segment might be slightly larger or smaller than this value. 
This configuration cannot be used together with `maxRowsPerSegment`.|no 
(default = 419430400 if `maxRowsPerSegment` is not specified)|
+|`taskPriority`|[Priority](../ingestion/tasks.html#task-priorities) of 
compaction task.|no (default = 25)|
+|`inputSegmentSizeBytes`|Maximum number of total segment bytes processed per 
compaction task. Since a time chunk must be processed in its entirety, if the 
segments for a particular time chunk have a total size in bytes greater than 
this parameter, compaction will not run for that time chunk. Because each 
compaction task runs with a single thread, setting this value too far above 
1–2GB will result in compaction tasks taking an excessive amount of time.|no 
(default = 419430400)|
+|`targetCompactionSizeBytes`|The target segment size, for each segment, after 
compaction. The actual sizes of compacted segments might be slightly larger or 
smaller than this value. Each compaction task may generate more than one output 
segment, and it will try to keep each output segment close to this configured 
size. This configuration cannot be used together with `maxRowsPerSegment`.|no 
(default = 419430400)|
 |`maxRowsPerSegment`|Max number of rows per segment after compaction. This 
configuration cannot be used together with `targetCompactionSizeBytes`.|no|
-|`maxNumSegmentsToCompact`|Max number of segments to compact together.|no 
(default = 150)|
+|`maxNumSegmentsToCompact`|Maximum number of segments to compact together per 
compaction task. Since a time chunk must be processed in its entirety, if a 
time chunk has a total number of segments greater than this parameter, 
compaction will not run for that time chunk.|no (default = 150)|
 |`skipOffsetFromLatest`|The offset for searching segments to be compacted. 
Strongly recommended to set for realtime dataSources. |no (default = "P1D")|
-|`tuningConfig`|Tuning config for compact tasks. See [Compaction 
TuningConfig](#compaction-tuningconfig).|no|
-|`taskContext`|[Task context](../ingestion/tasks.html#task-context) for 
compact tasks.|no|
+|`tuningConfig`|Tuning config for compaction tasks. See below [Compaction Task 
TuningConfig](#compact-task-tuningconfig).|no|
+|`taskContext`|[Task context](../ingestion/tasks.html#task-context) for 
compaction tasks.|no|
 
 An example of compaction config is:
 
@@ -845,7 +845,12 @@ An example of compaction config is:
 }
 ```
 
-For realtime dataSources, it's recommended to set `skipOffsetFromLatest` to 
some sufficiently large value to avoid frequent compact task failures.
+Note that compaction tasks can fail if their locks are revoked by other tasks 
of higher priorities.
+Since realtime tasks have a higher priority than compaction tasks by default,
+it can be problematic if there are frequent conflicts between compaction tasks 
and realtime tasks.
+If this is the case, the coordinator's automatic compaction might get stuck 
because of frequent compaction task failures.
+This kind of problem may happen especially in Kafka/Kinesis indexing systems 
which allow late data arrival.
+If you see this problem, it's recommended to set `skipOffsetFromLatest` to 
some large enough value to avoid such conflicts between compaction tasks and 
realtime tasks.
 
 # Compaction TuningConfig
 
diff --git a/docs/content/design/coord

[incubator-druid] branch master updated: Some adjustments to config examples. (#6973)

2019-01-31 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 4e42632  Some adjustments to config examples. (#6973)
4e42632 is described below

commit 4e426327bb51674a2112b17ba6347405837beef2
Author: Gian Merlino 
AuthorDate: Thu Jan 31 17:59:39 2019 -0800

Some adjustments to config examples. (#6973)

* Some adjustments to config examples.

- Add ExitOnOutOfMemoryError to jvm.config examples. It was added a
pretty long time ago (8u92) and is helpful since it prevents zombie
processes from hanging around. (OOMEs tend to bork things)
- Disable Broker caching and enable it on Historicals in example
configs. This config tends to scale better since it enables the
Historicals to merge results rather than sending everything by-segment
to the Broker. Also switch to "caffeine" cache from "local".
- Increase concurrency a bit for Broker example config.
- Enable SQL in the example config, a baby step towards making SQL
more of a thing. (It's still off by default in the code.)
- Reduce memory use a bit for the quickstart configs.
- Add example Router configs, in case someone wants to use that. One
reason might be to get the fancy new console (#6923).

* Add example Router configs.

* Fix up router example properties.

* Add router to quickstart supervise conf.
---
 .../conf/druid/_common/common.runtime.properties   |  5 +++
 examples/conf/druid/broker/jvm.config  |  1 +
 examples/conf/druid/broker/runtime.properties  | 19 +-
 examples/conf/druid/coordinator/jvm.config |  1 +
 examples/conf/druid/historical/jvm.config  |  1 +
 examples/conf/druid/historical/runtime.properties  |  6 
 examples/conf/druid/middleManager/jvm.config   |  1 +
 .../conf/druid/middleManager/runtime.properties|  2 +-
 examples/conf/druid/overlord/jvm.config|  1 +
 .../conf/druid/{historical => router}/jvm.config   |  8 +++--
 .../{historical => router}/runtime.properties  | 22 ++--
 .../tutorial/conf/druid/broker/jvm.config  |  7 ++--
 .../tutorial/conf/druid/broker/runtime.properties  | 42 --
 .../tutorial/conf/druid/coordinator/jvm.config |  5 +--
 .../conf/druid/coordinator/runtime.properties  | 19 ++
 .../tutorial/conf/druid/historical/jvm.config  |  7 ++--
 .../conf/druid/historical/runtime.properties   | 31 ++--
 .../tutorial/conf/druid/middleManager/jvm.config   |  1 +
 .../conf/druid/middleManager/runtime.properties| 21 ++-
 .../tutorial/conf/druid/overlord/jvm.config|  5 +--
 .../conf/druid/overlord/runtime.properties | 19 ++
 .../tutorial/conf/druid/router}/jvm.config |  8 +++--
 .../tutorial/conf/druid/router}/runtime.properties | 22 ++--
 .../quickstart/tutorial/conf/tutorial-cluster.conf |  1 +
 24 files changed, 196 insertions(+), 59 deletions(-)

diff --git a/examples/conf/druid/_common/common.runtime.properties 
b/examples/conf/druid/_common/common.runtime.properties
index b7d1870..9db6060 100644
--- a/examples/conf/druid/_common/common.runtime.properties
+++ b/examples/conf/druid/_common/common.runtime.properties
@@ -120,3 +120,8 @@ druid.emitter.logging.logLevel=info
 # ommiting this will lead to index double as float at the storage layer
 
 druid.indexing.doubleStorage=double
+
+#
+# SQL
+#
+druid.sql.enable=true
diff --git a/examples/conf/druid/broker/jvm.config 
b/examples/conf/druid/broker/jvm.config
index a6a9982..cf67f93 100644
--- a/examples/conf/druid/broker/jvm.config
+++ b/examples/conf/druid/broker/jvm.config
@@ -2,6 +2,7 @@
 -Xms24g
 -Xmx24g
 -XX:MaxDirectMemorySize=4096m
+-XX:+ExitOnOutOfMemoryError
 -Duser.timezone=UTC
 -Dfile.encoding=UTF-8
 -Djava.io.tmpdir=var/tmp
diff --git a/examples/conf/druid/broker/runtime.properties 
b/examples/conf/druid/broker/runtime.properties
index 75a3ccd..9421053 100644
--- a/examples/conf/druid/broker/runtime.properties
+++ b/examples/conf/druid/broker/runtime.properties
@@ -20,16 +20,17 @@
 druid.service=druid/broker
 druid.plaintextPort=8082
 
-# HTTP server threads
-druid.broker.http.numConnections=5
-druid.server.http.numThreads=25
+# HTTP server settings
+druid.server.http.numThreads=60
+
+# HTTP client settings
+druid.broker.http.numConnections=10
 
 # Processing threads and buffers
 druid.processing.buffer.sizeBytes=536870912
-druid.processing.numThreads=7
+druid.processing.numMergeBuffers=2
+druid.processing.numThreads=1
 
-# Query cache
-druid.broker.cache.useCache=true
-druid.broker.cache.populateCache=true
-druid.cache.type=local
-druid.cache.sizeInBytes=20
+# Query cache disabled -- push down caching and merging instead
+dr

[incubator-druid] branch master updated: Add several missing inspectRuntimeShape() calls (#6893)

2019-01-31 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new f7df5fe  Add several missing inspectRuntimeShape() calls (#6893)
f7df5fe is described below

commit f7df5fedcc452a8ba4d8e26a560a1ae52a0e85a5
Author: Roman Leventov 
AuthorDate: Fri Feb 1 11:04:26 2019 +0700

Add several missing inspectRuntimeShape() calls (#6893)

* Add several missing inspectRuntimeShape() calls

* Add lgK to runtime shapes
---
 .../druid/benchmark/ExpressionAggregationBenchmark.java |  9 +
 .../hll/HllSketchBuildBufferAggregator.java | 16 +---
 .../datasketches/hll/HllSketchMergeAggregator.java  |  4 ++--
 .../hll/HllSketchMergeBufferAggregator.java | 17 ++---
 .../FixedBucketsHistogramAggregatorFactory.java |  9 -
 .../druid/query/aggregation/AggregateCombiner.java  |  5 +++--
 .../query/aggregation/NullableBufferAggregator.java | 17 -
 .../monomorphicprocessing/RuntimeShapeInspector.java|  4 ++--
 8 files changed, 59 insertions(+), 22 deletions(-)

diff --git 
a/benchmarks/src/main/java/org/apache/druid/benchmark/ExpressionAggregationBenchmark.java
 
b/benchmarks/src/main/java/org/apache/druid/benchmark/ExpressionAggregationBenchmark.java
index ed56d74..69b8f80 100644
--- 
a/benchmarks/src/main/java/org/apache/druid/benchmark/ExpressionAggregationBenchmark.java
+++ 
b/benchmarks/src/main/java/org/apache/druid/benchmark/ExpressionAggregationBenchmark.java
@@ -32,6 +32,7 @@ import org.apache.druid.query.aggregation.BufferAggregator;
 import org.apache.druid.query.aggregation.DoubleSumAggregatorFactory;
 import org.apache.druid.query.aggregation.JavaScriptAggregatorFactory;
 import org.apache.druid.query.expression.TestExprMacroTable;
+import org.apache.druid.query.monomorphicprocessing.RuntimeShapeInspector;
 import org.apache.druid.segment.BaseFloatColumnValueSelector;
 import org.apache.druid.segment.ColumnSelectorFactory;
 import org.apache.druid.segment.Cursor;
@@ -240,10 +241,18 @@ public class ExpressionAggregationBenchmark
 {
   throw new UnsupportedOperationException();
 }
+
 @Override
 public void close()
 {
+  // nothing to close
+}
 
+@Override
+public void inspectRuntimeShape(RuntimeShapeInspector inspector)
+{
+  inspector.visit("xSelector", xSelector);
+  inspector.visit("ySelector", ySelector);
 }
   }
 }
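The `inspectRuntimeShape()` calls being added are instances of a visitor pattern: hot-path objects report which fields affect their "runtime shape" so the framework can specialize per shape. A minimal stand-alone sketch (the inspector here just records field names; Druid's real `RuntimeShapeInspector` does more):

```java
import java.util.ArrayList;
import java.util.List;

interface ShapeInspector
{
  void visit(String fieldName, Object value);
}

class RecordingInspector implements ShapeInspector
{
  final List<String> seen = new ArrayList<>();

  @Override
  public void visit(String fieldName, Object value)
  {
    seen.add(fieldName + ":" + value.getClass().getSimpleName());
  }
}

public class ShapeSketch
{
  static class Aggregator
  {
    final Object xSelector = "x";
    final Object ySelector = "y";

    // Reports every field that influences this object's runtime shape.
    void inspectRuntimeShape(ShapeInspector inspector)
    {
      inspector.visit("xSelector", xSelector);
      inspector.visit("ySelector", ySelector);
    }
  }

  public static void main(String[] args)
  {
    RecordingInspector inspector = new RecordingInspector();
    new Aggregator().inspectRuntimeShape(inspector);
    System.out.println(inspector.seen);  // [xSelector:String, ySelector:String]
  }
}
```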
diff --git 
a/extensions-core/datasketches/src/main/java/org/apache/druid/query/aggregation/datasketches/hll/HllSketchBuildBufferAggregator.java
 
b/extensions-core/datasketches/src/main/java/org/apache/druid/query/aggregation/datasketches/hll/HllSketchBuildBufferAggregator.java
index a170502..0ec525e 100644
--- 
a/extensions-core/datasketches/src/main/java/org/apache/druid/query/aggregation/datasketches/hll/HllSketchBuildBufferAggregator.java
+++ 
b/extensions-core/datasketches/src/main/java/org/apache/druid/query/aggregation/datasketches/hll/HllSketchBuildBufferAggregator.java
@@ -26,6 +26,7 @@ import com.yahoo.sketches.hll.TgtHllType;
 import it.unimi.dsi.fastutil.ints.Int2ObjectMap;
 import it.unimi.dsi.fastutil.ints.Int2ObjectOpenHashMap;
 import org.apache.druid.query.aggregation.BufferAggregator;
+import org.apache.druid.query.monomorphicprocessing.RuntimeShapeInspector;
 import org.apache.druid.segment.ColumnValueSelector;
 
 import java.nio.ByteBuffer;
@@ -42,7 +43,7 @@ import java.util.concurrent.locks.ReadWriteLock;
 public class HllSketchBuildBufferAggregator implements BufferAggregator
 {
 
-  // for locking per buffer position (power of 2 to make index computation faster)
+  /** for locking per buffer position (power of 2 to make index computation faster) */
   private static final int NUM_STRIPES = 64;
 
   private final ColumnValueSelector selector;
@@ -73,7 +74,7 @@ public class HllSketchBuildBufferAggregator implements 
BufferAggregator
 putSketchIntoCache(buf, position, new HllSketch(lgK, tgtHllType, mem));
   }
 
-  /*
+  /**
* This method uses locks because it can be used during indexing,
* and Druid can call aggregate() and get() concurrently
* See https://github.com/druid-io/druid/pull/3956
@@ -96,7 +97,7 @@ public class HllSketchBuildBufferAggregator implements 
BufferAggregator
 }
   }
 
-  /*
+  /**
* This method uses locks because it can be used during indexing,
* and Druid can call aggregate() and get() concurrently
* See https://github.com/druid-io/druid/pull/3956
@@ -181,4 +182,13 @@ public class HllSketchBuildBufferAggregator implements 
BufferAggregator
 return hashCode ^ (hashCode >>> 7) ^ (hashCode >>> 4);
   }
 
+  @Override
+  public void inspectRuntimeShape(RuntimeShapeInspector inspector)
+  {
+inspector.visit("
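The HllSketchBuildBufferAggregator diff above notes that `NUM_STRIPES` is a power of 2 "to make index computation faster": with a power-of-two stripe count, the lock-stripe index can be computed with a bitwise AND instead of a modulo. A minimal sketch of that idea (the function name and hash values are illustrative, not from the commit):

```python
NUM_STRIPES = 64  # power of 2, as in the diff above

def stripe_index(position_hash: int) -> int:
    # Selecting the low 6 bits with a mask is equivalent to
    # position_hash % NUM_STRIPES for non-negative hashes,
    # but avoids the slower division instruction.
    return position_hash & (NUM_STRIPES - 1)

# The bitmask and modulo forms agree for non-negative inputs
for h in (0, 1, 63, 64, 65, 12345, 2**31 - 1):
    assert stripe_index(h) == h % NUM_STRIPES

print(stripe_index(12345))  # -> 57
```

This is why the comment in the diff stresses the power-of-2 requirement: the equivalence above only holds when the stripe count has a single set bit.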

[incubator-druid] branch master updated: Remove repeated word in indexing-service.md (#6983)

2019-02-01 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 852fe86  Remove repeated word in indexing-service.md (#6983)
852fe86 is described below

commit 852fe86ea2b361a97d1fe0770bc2457a4809d86e
Author: jorbay-au <40071830+jorbay...@users.noreply.github.com>
AuthorDate: Fri Feb 1 13:38:22 2019 -0800

Remove repeated word in indexing-service.md (#6983)
---
 docs/content/design/indexing-service.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/content/design/indexing-service.md 
b/docs/content/design/indexing-service.md
index 0d583b7..8a3d5a6 100644
--- a/docs/content/design/indexing-service.md
+++ b/docs/content/design/indexing-service.md
@@ -26,7 +26,7 @@ title: "Indexing Service"
 
 The indexing service is a highly-available, distributed service that runs 
indexing related tasks. 
 
-Indexing tasks [tasks](../ingestion/tasks.html) create (and sometimes destroy) Druid [segments](../design/segments.html). The indexing service has a master/slave like architecture.
+Indexing [tasks](../ingestion/tasks.html) create (and sometimes destroy) Druid [segments](../design/segments.html). The indexing service has a master/slave like architecture.
 
 The indexing service is composed of three main components: a 
[Peon](../design/peons.html) component that can run a single task, a [Middle 
Manager](../design/middlemanager.html) component that manages Peons, and an 
[Overlord](../design/overlord.html) component that manages task distribution to 
MiddleManagers.
 Overlords and MiddleManagers may run on the same node or across multiple nodes 
while MiddleManagers and Peons always run on the same node.


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[incubator-druid] branch master updated: Update metrics.md (#6976)

2019-02-01 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new e45f9ea  Update metrics.md (#6976)
e45f9ea is described below

commit e45f9ea5e9d28e2fd08799696b4722eb525d5376
Author: lxqfy 
AuthorDate: Sat Feb 2 05:40:44 2019 +0800

Update metrics.md (#6976)
---
 docs/content/operations/metrics.md | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/docs/content/operations/metrics.md 
b/docs/content/operations/metrics.md
index 7bce8c0..e8e8958 100644
--- a/docs/content/operations/metrics.md
+++ b/docs/content/operations/metrics.md
@@ -154,8 +154,7 @@ These metrics are only available if the 
RealtimeMetricsMonitor is included in th
 |`ingest/events/thrownAway`|Number of events rejected because they are outside 
the windowPeriod.|dataSource, taskId, taskType.|0|
 |`ingest/events/unparseable`|Number of events rejected because the events are 
unparseable.|dataSource, taskId, taskType.|0|
 |`ingest/events/duplicate`|Number of events rejected because the events are 
duplicated.|dataSource, taskId, taskType.|0|
-|`ingest/events/processed`|Number of events successfully processed per 
emission period.|dataSource, taskId, taskType.|Equal to your # of events per
-emission period.|
+|`ingest/events/processed`|Number of events successfully processed per 
emission period.|dataSource, taskId, taskType.|Equal to your # of events per 
emission period.|
 |`ingest/rows/output`|Number of Druid rows persisted.|dataSource, taskId, 
taskType.|Your # of events with rollup.|
 |`ingest/persists/count`|Number of times persist occurred.|dataSource, taskId, 
taskType.|Depends on configuration.|
 |`ingest/persists/time`|Milliseconds spent doing intermediate 
persist.|dataSource, taskId, taskType.|Depends on configuration. Generally a 
few minutes at most.|
@@ -252,9 +251,7 @@ The following metric is only available if the 
EventReceiverFirehoseMonitor modul
 
 |Metric|Description|Dimensions|Normal Value|
 |--|---|--||
-|`ingest/events/buffered`|Number of events queued in the 
EventReceiverFirehose's buffer|serviceName, dataSource, taskId, taskType, 
bufferCapacity
-.|Equal
-to current # of events in the buffer queue.|
+|`ingest/events/buffered`|Number of events queued in the 
EventReceiverFirehose's buffer|serviceName, dataSource, taskId, taskType, 
bufferCapacity.|Equal to current # of events in the buffer queue.|
 |`ingest/bytes/received`|Number of bytes received by the 
EventReceiverFirehose.|serviceName, dataSource, taskId, taskType.|Varies.|
 
 ## Sys


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[incubator-druid] branch master updated: bloom filter sql aggregator (#6950)

2019-02-01 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 7a5827e  bloom filter sql aggregator (#6950)
7a5827e is described below

commit 7a5827e12eb65eef80e08fe86ef76604019d6af8
Author: Clint Wylie 
AuthorDate: Fri Feb 1 13:54:46 2019 -0800

bloom filter sql aggregator (#6950)

* adds sql aggregator for bloom filter, adds complex value serde for sql 
results

* fix tests

* checkstyle

* fix copy-paste
---
 docs/content/configuration/index.md|   1 +
 .../development/extensions-core/bloom-filter.md|  25 +-
 docs/content/querying/sql.md   |   1 +
 .../druid/guice/BloomFilterExtensionModule.java|   3 +-
 .../bloom/sql/BloomFilterSqlAggregator.java| 212 +++
 .../apache/druid/query/filter/BloomKFilter.java|   2 +-
 .../bloom/sql/BloomFilterSqlAggregatorTest.java| 642 +
 .../druid/sql/calcite/planner/PlannerConfig.java   |  14 +-
 .../apache/druid/sql/calcite/rel/QueryMaker.java   |  14 +-
 .../druid/sql/calcite/BaseCalciteQueryTest.java|   8 +
 .../apache/druid/sql/calcite/CalciteQueryTest.java |  17 +-
 .../druid/sql/calcite/http/SqlResourceTest.java|   9 +-
 12 files changed, 936 insertions(+), 12 deletions(-)

diff --git a/docs/content/configuration/index.md 
b/docs/content/configuration/index.md
index 990f9ce..221cccd 100644
--- a/docs/content/configuration/index.md
+++ b/docs/content/configuration/index.md
@@ -1418,6 +1418,7 @@ The Druid SQL server is configured through the following 
properties on the Broke
 |`druid.sql.planner.useFallback`|Whether to evaluate operations on the Broker 
when they cannot be expressed as Druid queries. This option is not recommended 
for production since it can generate unscalable query plans. If false, SQL 
queries that cannot be translated to Druid queries will fail.|false|
 |`druid.sql.planner.requireTimeCondition`|Whether to require SQL to have 
filter conditions on __time column so that all generated native queries will 
have user specified intervals. If true, all queries without filter condition on 
__time column will fail|false|
 |`druid.sql.planner.sqlTimeZone`|Sets the default time zone for the server, 
which will affect how time functions and timestamp literals behave. Should be a 
time zone name like "America/Los_Angeles" or offset like "-08:00".|UTC|
+|`druid.sql.planner.serializeComplexValues`|Whether to serialize "complex" 
output values, false will return the class name instead of the serialized 
value.|true|
 
  Broker Caching
 
diff --git a/docs/content/development/extensions-core/bloom-filter.md 
b/docs/content/development/extensions-core/bloom-filter.md
index f878e75..651cc30 100644
--- a/docs/content/development/extensions-core/bloom-filter.md
+++ b/docs/content/development/extensions-core/bloom-filter.md
@@ -89,9 +89,9 @@ This string can then be used in the native or sql Druid query.
 
 Note: `org.apache.hive.common.util.BloomKFilter` provides a serialize method 
which can be used to serialize bloom filters to outputStream.
 
-### SQL Queries
+### Filtering SQL Queries
 
-Bloom filters are supported in SQL via the `bloom_filter_test` operator:
+Bloom filters can be used in SQL `WHERE` clauses via the `bloom_filter_test` 
operator:
 
 ```sql
SELECT COUNT(*) FROM druid.foo WHERE bloom_filter_test(<expr>, '<serialized-filter>')
@@ -108,7 +108,11 @@ bloom_filter_test(, 
'')
 
 ## Bloom Filter Query Aggregator
 
-Input for a `bloomKFilter` can also be created from a druid query with the 
`bloom` aggregator.
+Input for a `bloomKFilter` can also be created from a druid query with the 
`bloom` aggregator. Note that it is very 
+important to set a reasonable value for the `maxNumEntries` parameter, which 
is the maximum number of distinct entries 
that the bloom filter can represent without increasing the false positive rate. 
It may be worth performing a query using
+one of the unique count sketches to calculate the value for this parameter in 
order to build a bloom filter appropriate 
+for the query. 
 
 ### JSON Specification of Bloom Filter Aggregator
 
@@ -157,8 +161,19 @@ response
 
[{"timestamp":"2015-09-12T00:00:00.000Z","result":{"userBloom":"BAAAJh..."}}]
 ```
 
-These values can then be set in the filter specification above. 
+These values can then be set in the filter specification described above. 
 
 Ordering results by a bloom filter aggregator, for example in a TopN query, 
will perform a comparatively expensive 
 linear scan _of the filter itself_ to count the number of set bits as a means 
of approximating how many items have been 
-added to the set. As such, ordering by an alternate aggregation is recommended 
if possible. 
\ No newlin
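The `maxNumEntries` guidance in the bloom-filter docs above can be made concrete with the standard Bloom filter sizing formulas (these formulas are textbook results, not something stated in the commit; the example values are illustrative):

```python
import math

def bloom_filter_size(max_num_entries: int, fpp: float) -> tuple[int, int]:
    """Standard Bloom filter sizing: returns (m, k), the bit-array size and
    hash-function count for n expected entries at false-positive rate fpp."""
    n = max_num_entries
    # m = -n * ln(p) / (ln 2)^2  -- total bits needed
    m = math.ceil(-n * math.log(fpp) / (math.log(2) ** 2))
    # k = (m / n) * ln 2  -- optimal number of hash functions
    k = max(1, round((m / n) * math.log(2)))
    return m, k

bits, hashes = bloom_filter_size(1_000_000, 0.01)
print(bits, hashes)  # about 9.59 bits per entry and 7 hashes at 1% fpp
```

This illustrates why the docs suggest estimating distinct entries with a unique-count sketch first: undersizing `maxNumEntries` inflates the false positive rate, while oversizing wastes space in the serialized filter.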

[incubator-druid] branch master updated: maintenance mode for Historical (#6349)

2019-02-04 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 97b6407  maintenance mode for Historical (#6349)
97b6407 is described below

commit 97b6407983f597fc039bb90339b41086bbaaea56
Author: Egor Riashin 
AuthorDate: Tue Feb 5 05:11:00 2019 +0300

maintenance mode for Historical (#6349)

* maintenance mode for Historical

forbidden api fix, config deserialization fix

logging fix, unit tests

* addressed comments

* addressed comments

* a style fix

* addressed comments

* a unit-test fix due to recent code-refactoring

* docs & refactoring

* addressed comments

* addressed a LoadRule drop flaw

* post merge cleaning up
---
 docs/content/configuration/index.md|   6 +-
 server/pom.xml |   6 +
 .../coordinator/CoordinatorDynamicConfig.java  | 144 ---
 .../druid/server/coordinator/DruidCluster.java |  14 +-
 .../druid/server/coordinator/DruidCoordinator.java |   9 +-
 .../DruidCoordinatorCleanupPendingSegments.java|   2 +-
 .../coordinator/DruidCoordinatorRuntimeParams.java |  13 +-
 .../druid/server/coordinator/ServerHolder.java |  18 ++
 .../helper/DruidCoordinatorBalancer.java   |  95 ---
 .../helper/DruidCoordinatorSegmentKiller.java  |   2 +-
 .../rules/BroadcastDistributionRule.java   |   5 +-
 .../druid/server/coordinator/rules/LoadRule.java   |  42 ++-
 .../http/CoordinatorDynamicConfigsResource.java|  31 ++-
 .../DruidCoordinatorBalancerProfiler.java  |   7 +-
 .../coordinator/DruidCoordinatorBalancerTest.java  | 259 ++-
 .../DruidCoordinatorBalancerTester.java|  14 +-
 .../DruidCoordinatorRuleRunnerTest.java|   6 +-
 .../rules/BroadcastDistributionRuleTest.java   | 113 +++-
 .../server/coordinator/rules/LoadRuleTest.java | 287 -
 .../server/http/CoordinatorDynamicConfigTest.java  |  92 +--
 20 files changed, 1012 insertions(+), 153 deletions(-)

diff --git a/docs/content/configuration/index.md 
b/docs/content/configuration/index.md
index 221cccd..c63eb41 100644
--- a/docs/content/configuration/index.md
+++ b/docs/content/configuration/index.md
@@ -779,7 +779,9 @@ A sample Coordinator dynamic config JSON object is shown 
below:
   "replicantLifetime": 15,
   "replicationThrottleLimit": 10,
   "emitBalancingStats": false,
-  "killDataSourceWhitelist": ["wikipedia", "testDatasource"]
+  "killDataSourceWhitelist": ["wikipedia", "testDatasource"],
+  "historicalNodesInMaintenance": ["localhost:8182", "localhost:8282"],
+  "nodesInMaintenancePriority": 7
 }
 ```
 
@@ -799,6 +801,8 @@ Issuing a GET request at the same URL will return the spec 
that is currently in
 |`killAllDataSources`|Send kill tasks for ALL dataSources if property 
`druid.coordinator.kill.on` is true. If this is set to true then 
`killDataSourceWhitelist` must not be specified or be empty list.|false|
 |`killPendingSegmentsSkipList`|List of dataSources for which pendingSegments 
are _NOT_ cleaned up if property `druid.coordinator.kill.pendingSegments.on` is 
true. This can be a list of comma-separated dataSources or a JSON array.|none|
 |`maxSegmentsInNodeLoadingQueue`|The maximum number of segments that could be 
queued for loading to any given server. This parameter could be used to speed 
up segments loading process, especially if there are "slow" nodes in the 
cluster (with low loading speed) or if too many segments are scheduled to be 
replicated to some particular node (faster loading could be preferred to better 
segments distribution). Desired value depends on segments loading speed, 
acceptable replication time and numbe [...]
+|`historicalNodesInMaintenance`| List of Historical nodes in maintenance mode. 
Coordinator doesn't assign new segments on those nodes and moves segments from 
the nodes according to a specified priority.|none|
|`nodesInMaintenancePriority`| Priority of segments from servers in maintenance. Coordinator takes ceil(maxSegmentsToMove * (priority / 10)) from servers in maintenance during the balancing phase, i.e.: 0 - no segments from servers in maintenance will be processed during balancing; 5 - 50% of segments from servers in maintenance; 10 - 100% of segments from servers in maintenance. By leveraging the priority an operator can prevent general nodes from overload or decrease maintenance time instead.|7|
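The ceil(maxSegmentsToMove * (priority / 10)) formula in the table row above can be sketched directly (the function name and sample values are illustrative; only the formula comes from the docs):

```python
import math

def segments_from_maintenance(max_segments_to_move: int, priority: int) -> int:
    """Segments taken from servers in maintenance per balancing phase,
    per the dynamic-config formula ceil(maxSegmentsToMove * priority / 10)."""
    if not 0 <= priority <= 10:
        raise ValueError("priority must be between 0 and 10")
    return math.ceil(max_segments_to_move * priority / 10)

# Illustrative values; priority 7 is the default shown in the table above
print(segments_from_maintenance(5, 0))   # -> 0  (maintenance servers skipped)
print(segments_from_maintenance(5, 7))   # -> 4  (ceil(3.5))
print(segments_from_maintenance(5, 10))  # -> 5  (all movement from maintenance)
```

Note that the ceiling means any nonzero priority moves at least one segment per phase when maxSegmentsToMove is positive.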
 
 To view the audit history of Coordinator dynamic config issue a GET request to 
the URL -
 
diff --git a/server/pom.xml b/server/pom.xml
index b28013b..ab2ca24 100644
---

[incubator-druid] branch master updated: Set version to 0.14.0-incubating-SNAPSHOT (#7003)

2019-02-04 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 8bc5eaa  Set version to 0.14.0-incubating-SNAPSHOT (#7003)
8bc5eaa is described below

commit 8bc5eaa908b38af81d2148b64c6eecfca2bafef6
Author: Jonathan Wei 
AuthorDate: Mon Feb 4 19:36:20 2019 -0800

Set version to 0.14.0-incubating-SNAPSHOT (#7003)
---
 aws-common/pom.xml   | 2 +-
 benchmarks/pom.xml   | 2 +-
 core/pom.xml | 2 +-
 distribution/pom.xml | 2 +-
 examples/pom.xml | 2 +-
 extendedset/pom.xml  | 2 +-
 extensions-contrib/ambari-metrics-emitter/pom.xml| 2 +-
 extensions-contrib/azure-extensions/pom.xml  | 2 +-
 extensions-contrib/cassandra-storage/pom.xml | 2 +-
 extensions-contrib/cloudfiles-extensions/pom.xml | 2 +-
 extensions-contrib/distinctcount/pom.xml | 2 +-
 extensions-contrib/druid-rocketmq/pom.xml| 2 +-
 extensions-contrib/google-extensions/pom.xml | 2 +-
 extensions-contrib/graphite-emitter/pom.xml  | 2 +-
 extensions-contrib/influx-extensions/pom.xml | 2 +-
 extensions-contrib/kafka-eight-simpleConsumer/pom.xml| 2 +-
 extensions-contrib/kafka-emitter/pom.xml | 2 +-
 extensions-contrib/materialized-view-maintenance/pom.xml | 2 +-
 extensions-contrib/materialized-view-selection/pom.xml   | 2 +-
 extensions-contrib/opentsdb-emitter/pom.xml  | 2 +-
 extensions-contrib/orc-extensions/pom.xml| 2 +-
 extensions-contrib/rabbitmq/pom.xml  | 2 +-
 extensions-contrib/redis-cache/pom.xml   | 2 +-
 extensions-contrib/sqlserver-metadata-storage/pom.xml| 2 +-
 extensions-contrib/statsd-emitter/pom.xml| 2 +-
 extensions-contrib/thrift-extensions/pom.xml | 2 +-
 extensions-contrib/time-min-max/pom.xml  | 2 +-
 extensions-contrib/virtual-columns/pom.xml   | 2 +-
 extensions-core/avro-extensions/pom.xml  | 2 +-
 extensions-core/datasketches/pom.xml | 2 +-
 extensions-core/druid-basic-security/pom.xml | 2 +-
 extensions-core/druid-bloom-filter/pom.xml   | 2 +-
 extensions-core/druid-kerberos/pom.xml   | 2 +-
 extensions-core/hdfs-storage/pom.xml | 2 +-
 extensions-core/histogram/pom.xml| 2 +-
 extensions-core/kafka-eight/pom.xml  | 2 +-
 extensions-core/kafka-extraction-namespace/pom.xml   | 2 +-
 extensions-core/kafka-indexing-service/pom.xml   | 2 +-
 extensions-core/kinesis-indexing-service/pom.xml | 2 +-
 extensions-core/lookups-cached-global/pom.xml| 2 +-
 extensions-core/lookups-cached-single/pom.xml| 2 +-
 extensions-core/mysql-metadata-storage/pom.xml   | 2 +-
 extensions-core/parquet-extensions/pom.xml   | 2 +-
 extensions-core/postgresql-metadata-storage/pom.xml  | 2 +-
 extensions-core/protobuf-extensions/pom.xml  | 2 +-
 extensions-core/s3-extensions/pom.xml| 2 +-
 extensions-core/simple-client-sslcontext/pom.xml | 2 +-
 extensions-core/stats/pom.xml| 2 +-
 hll/pom.xml  | 2 +-
 indexing-hadoop/pom.xml  | 2 +-
 indexing-service/pom.xml | 2 +-
 integration-tests/pom.xml| 2 +-
 pom.xml  | 2 +-
 processing/pom.xml   | 2 +-
 server/pom.xml   | 2 +-
 services/pom.xml | 2 +-
 sql/pom.xml  | 2 +-
 web-console/pom.xml  | 2 +-
 58 files changed, 58 insertions(+), 58 deletions(-)

diff --git a/aws-common/pom.xml b/aws-common/pom.xml
index 941dc28..2a65b70 100644
--- a/aws-common/pom.xml
+++ b/aws-common/pom.xml
@@ -28,7 +28,7 @@
 
 org.apache.druid
 druid
-0.13.0-incubating-SNAPSHOT
+0.14.0-incubating-SNAPSHOT
 
 
 
diff --git a/benchmarks/pom.xml b/benchmarks/pom.xml
index 9bd28e9..685ffe0 100644
--- a/benchmarks/pom.xml
+++ b/benchmarks/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.druid
 druid
-0.13.0-incubating-SNAPSHOT
+0.14.0-incubating-SNAPSHOT
   
 
   
diff --git a/core/pom.xml b/core/pom.xml
index 48bb5f3..9dbe3de 100644
--- a/core/pom.xml
+++ b/core/pom.xml

[incubator-druid] branch 0.14.0-incubating created (now 8bc5eaa)

2019-02-04 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 0.14.0-incubating
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git.


  at 8bc5eaa  Set version to 0.14.0-incubating-SNAPSHOT (#7003)

No new revisions were added by this update.


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[incubator-druid] branch master updated: Create Scan Benchmark (#6986)

2019-02-06 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 6723243  Create Scan Benchmark (#6986)
6723243 is described below

commit 6723243ed2160eee139d55654798f120946bce98
Author: Justin Borromeo 
AuthorDate: Wed Feb 6 14:45:01 2019 -0800

Create Scan Benchmark (#6986)

* Moved Scan Builder to Druids class and started on Scan Benchmark setup

* Need to form queries

* It runs.

* Remove todos

* Change number of benchmark iterations

* Changed benchmark params

* More param changes

* Made Jon's changes and removed TODOs

* Broke some long lines into two lines

* Decrease segment size for less memory usage

* Committing a param change to kick teamcity
---
 .../{SelectBenchmark.java => ScanBenchmark.java}   | 291 ++---
 .../druid/benchmark/query/SelectBenchmark.java |   1 -
 .../main/java/org/apache/druid/query/Druids.java   | 157 +++
 .../org/apache/druid/query/scan/ScanQuery.java | 171 +---
 .../org/apache/druid/query/DoubleStorageTest.java  |  14 +-
 .../query/scan/MultiSegmentScanQueryTest.java  |   5 +-
 .../druid/query/scan/ScanQueryRunnerTest.java  |   5 +-
 .../druid/sql/calcite/BaseCalciteQueryTest.java|   5 +-
 8 files changed, 320 insertions(+), 329 deletions(-)

diff --git 
a/benchmarks/src/main/java/org/apache/druid/benchmark/query/SelectBenchmark.java
 b/benchmarks/src/main/java/org/apache/druid/benchmark/query/ScanBenchmark.java
similarity index 55%
copy from 
benchmarks/src/main/java/org/apache/druid/benchmark/query/SelectBenchmark.java
copy to 
benchmarks/src/main/java/org/apache/druid/benchmark/query/ScanBenchmark.java
index 36fd251..511de6b 100644
--- 
a/benchmarks/src/main/java/org/apache/druid/benchmark/query/SelectBenchmark.java
+++ 
b/benchmarks/src/main/java/org/apache/druid/benchmark/query/ScanBenchmark.java
@@ -20,8 +20,7 @@
 package org.apache.druid.benchmark.query;
 
 import com.fasterxml.jackson.databind.ObjectMapper;
-import com.google.common.base.Supplier;
-import com.google.common.base.Suppliers;
+import com.google.common.collect.ImmutableList;
 import com.google.common.io.Files;
 import org.apache.commons.io.FileUtils;
 import org.apache.druid.benchmark.datagen.BenchmarkDataGenerator;
@@ -32,9 +31,9 @@ import org.apache.druid.data.input.Row;
 import org.apache.druid.hll.HyperLogLogHash;
 import org.apache.druid.jackson.DefaultObjectMapper;
 import org.apache.druid.java.util.common.concurrent.Execs;
-import org.apache.druid.java.util.common.granularity.Granularities;
 import org.apache.druid.java.util.common.guava.Sequence;
 import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.query.DefaultGenericQueryMetricsFactory;
 import org.apache.druid.query.Druids;
 import org.apache.druid.query.FinalizeResultsQueryRunner;
 import org.apache.druid.query.Query;
@@ -43,17 +42,18 @@ import org.apache.druid.query.QueryRunner;
 import org.apache.druid.query.QueryRunnerFactory;
 import org.apache.druid.query.QueryToolChest;
 import org.apache.druid.query.Result;
-import org.apache.druid.query.TableDataSource;
 import org.apache.druid.query.aggregation.hyperloglog.HyperUniquesSerde;
-import org.apache.druid.query.dimension.DefaultDimensionSpec;
-import org.apache.druid.query.select.EventHolder;
-import org.apache.druid.query.select.PagingSpec;
-import org.apache.druid.query.select.SelectQuery;
-import org.apache.druid.query.select.SelectQueryConfig;
-import org.apache.druid.query.select.SelectQueryEngine;
-import org.apache.druid.query.select.SelectQueryQueryToolChest;
-import org.apache.druid.query.select.SelectQueryRunnerFactory;
-import org.apache.druid.query.select.SelectResultValue;
+import org.apache.druid.query.extraction.StrlenExtractionFn;
+import org.apache.druid.query.filter.BoundDimFilter;
+import org.apache.druid.query.filter.DimFilter;
+import org.apache.druid.query.filter.InDimFilter;
+import org.apache.druid.query.filter.SelectorDimFilter;
+import org.apache.druid.query.scan.ScanQuery;
+import org.apache.druid.query.scan.ScanQueryConfig;
+import org.apache.druid.query.scan.ScanQueryEngine;
+import org.apache.druid.query.scan.ScanQueryQueryToolChest;
+import org.apache.druid.query.scan.ScanQueryRunnerFactory;
+import org.apache.druid.query.scan.ScanResultValue;
 import org.apache.druid.query.spec.MultipleIntervalSegmentSpec;
 import org.apache.druid.query.spec.QuerySegmentSpec;
 import org.apache.druid.segment.IncrementalIndexSegment;
@@ -62,7 +62,6 @@ import org.apache.druid.segment.IndexMergerV9;
 import org.apache.druid.segment.IndexSpec;
 import org.apache.druid.segment.QueryableIndex;
 import org.apache.druid.segment.QueryableIndexSegment;
-import org.apache.druid.segment.column.ColumnConfig;

[incubator-druid] branch master updated: Set version to 0.15.0-incubating-SNAPSHOT (#7014)

2019-02-07 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new fafbc4a  Set version to 0.15.0-incubating-SNAPSHOT (#7014)
fafbc4a is described below

commit fafbc4a80e0bba0c1fbb5066b9f0ce37ffcab929
Author: Jonathan Wei 
AuthorDate: Thu Feb 7 14:02:52 2019 -0800

Set version to 0.15.0-incubating-SNAPSHOT (#7014)
---
 aws-common/pom.xml   | 2 +-
 benchmarks/pom.xml   | 2 +-
 core/pom.xml | 2 +-
 distribution/pom.xml | 2 +-
 examples/pom.xml | 2 +-
 extendedset/pom.xml  | 2 +-
 extensions-contrib/ambari-metrics-emitter/pom.xml| 2 +-
 extensions-contrib/azure-extensions/pom.xml  | 2 +-
 extensions-contrib/cassandra-storage/pom.xml | 2 +-
 extensions-contrib/cloudfiles-extensions/pom.xml | 2 +-
 extensions-contrib/distinctcount/pom.xml | 2 +-
 extensions-contrib/druid-rocketmq/pom.xml| 2 +-
 extensions-contrib/google-extensions/pom.xml | 2 +-
 extensions-contrib/graphite-emitter/pom.xml  | 2 +-
 extensions-contrib/influx-extensions/pom.xml | 2 +-
 extensions-contrib/kafka-eight-simpleConsumer/pom.xml| 2 +-
 extensions-contrib/kafka-emitter/pom.xml | 2 +-
 extensions-contrib/materialized-view-maintenance/pom.xml | 2 +-
 extensions-contrib/materialized-view-selection/pom.xml   | 2 +-
 extensions-contrib/opentsdb-emitter/pom.xml  | 2 +-
 extensions-contrib/orc-extensions/pom.xml| 2 +-
 extensions-contrib/rabbitmq/pom.xml  | 2 +-
 extensions-contrib/redis-cache/pom.xml   | 2 +-
 extensions-contrib/sqlserver-metadata-storage/pom.xml| 2 +-
 extensions-contrib/statsd-emitter/pom.xml| 2 +-
 extensions-contrib/thrift-extensions/pom.xml | 2 +-
 extensions-contrib/time-min-max/pom.xml  | 2 +-
 extensions-contrib/virtual-columns/pom.xml   | 2 +-
 extensions-core/avro-extensions/pom.xml  | 2 +-
 extensions-core/datasketches/pom.xml | 2 +-
 extensions-core/druid-basic-security/pom.xml | 2 +-
 extensions-core/druid-bloom-filter/pom.xml   | 2 +-
 extensions-core/druid-kerberos/pom.xml   | 2 +-
 extensions-core/hdfs-storage/pom.xml | 2 +-
 extensions-core/histogram/pom.xml| 2 +-
 extensions-core/kafka-eight/pom.xml  | 2 +-
 extensions-core/kafka-extraction-namespace/pom.xml   | 2 +-
 extensions-core/kafka-indexing-service/pom.xml   | 2 +-
 extensions-core/kinesis-indexing-service/pom.xml | 2 +-
 extensions-core/lookups-cached-global/pom.xml| 2 +-
 extensions-core/lookups-cached-single/pom.xml| 2 +-
 extensions-core/mysql-metadata-storage/pom.xml   | 2 +-
 extensions-core/parquet-extensions/pom.xml   | 2 +-
 extensions-core/postgresql-metadata-storage/pom.xml  | 2 +-
 extensions-core/protobuf-extensions/pom.xml  | 2 +-
 extensions-core/s3-extensions/pom.xml| 2 +-
 extensions-core/simple-client-sslcontext/pom.xml | 2 +-
 extensions-core/stats/pom.xml| 2 +-
 hll/pom.xml  | 2 +-
 indexing-hadoop/pom.xml  | 2 +-
 indexing-service/pom.xml | 2 +-
 integration-tests/pom.xml| 2 +-
 pom.xml  | 2 +-
 processing/pom.xml   | 2 +-
 server/pom.xml   | 2 +-
 services/pom.xml | 2 +-
 sql/pom.xml  | 2 +-
 web-console/pom.xml  | 2 +-
 58 files changed, 58 insertions(+), 58 deletions(-)

diff --git a/aws-common/pom.xml b/aws-common/pom.xml
index 2a65b70..b4e5e0d 100644
--- a/aws-common/pom.xml
+++ b/aws-common/pom.xml
@@ -28,7 +28,7 @@
 
 org.apache.druid
 druid
-0.14.0-incubating-SNAPSHOT
+0.15.0-incubating-SNAPSHOT
 
 
 
diff --git a/benchmarks/pom.xml b/benchmarks/pom.xml
index 685ffe0..80f2461 100644
--- a/benchmarks/pom.xml
+++ b/benchmarks/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.druid
 druid
-0.14.0-incubating-SNAPSHOT
+0.15.0-incubating-SNAPSHOT
   
 
   
diff --git a/core/pom.xml b/core/pom.xml
index 9dbe3de..deee460 100644
--- a/core/pom.xml
+++ b/core/pom.xml

[incubator-druid] branch master updated: [Issue #6967] NoClassDefFoundError when using druid-hdfs-storage (#7015)

2019-02-08 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 16a4a50  [Issue #6967] NoClassDefFoundError when using 
druid-hdfs-storage (#7015)
16a4a50 is described below

commit 16a4a50e9147a04dd47bf2eb897138197519b0bb
Author: Ankit Kothari 
AuthorDate: Fri Feb 8 18:26:37 2019 -0800

[Issue #6967] NoClassDefFoundError when using druid-hdfs-storage (#7015)

* Fix:
  1. hadoop-common dependency for druid-hdfs and druid-kerberos extensions
 Refactoring:
  2. Hadoop config call in the inner static class to avoid class path 
conflicts for stopGracefully kill

* Fix:
  1. hadoop-common test dependency

* Fix:
  1. Avoid issue of kill command once the job is actually completed
---
 extensions-core/druid-kerberos/pom.xml |   1 +
 extensions-core/hdfs-storage/pom.xml   | 138 +++--
 indexing-hadoop/pom.xml|  32 ++---
 .../indexing/common/task/HadoopIndexTask.java  |  64 ++
 4 files changed, 185 insertions(+), 50 deletions(-)

diff --git a/extensions-core/druid-kerberos/pom.xml 
b/extensions-core/druid-kerberos/pom.xml
index 5b94101..8740ab6 100644
--- a/extensions-core/druid-kerberos/pom.xml
+++ b/extensions-core/druid-kerberos/pom.xml
@@ -71,6 +71,7 @@
   org.apache.hadoop
   hadoop-common
   ${hadoop.compile.version}
+  compile
   
 
   commons-cli
diff --git a/extensions-core/hdfs-storage/pom.xml 
b/extensions-core/hdfs-storage/pom.xml
index ec4f014..07d1876 100644
--- a/extensions-core/hdfs-storage/pom.xml
+++ b/extensions-core/hdfs-storage/pom.xml
@@ -153,6 +153,130 @@
 
 
   org.apache.hadoop
+  hadoop-common
+  ${hadoop.compile.version}
+  compile
+  
+
+  commons-cli
+  commons-cli
+
+
+  commons-httpclient
+  commons-httpclient
+
+
+  log4j
+  log4j
+
+
+  commons-codec
+  commons-codec
+
+
+  commons-logging
+  commons-logging
+
+
+  commons-io
+  commons-io
+
+
+  commons-lang
+  commons-lang
+
+
+  org.apache.httpcomponents
+  httpclient
+
+
+  org.apache.httpcomponents
+  httpcore
+
+
+  org.codehaus.jackson
+  jackson-core-asl
+
+
+  org.codehaus.jackson
+  jackson-mapper-asl
+
+
+  org.apache.zookeeper
+  zookeeper
+
+
+  org.slf4j
+  slf4j-api
+
+
+  org.slf4j
+  slf4j-log4j12
+
+
+  javax.ws.rs
+  jsr311-api
+
+
+  com.google.code.findbugs
+  jsr305
+
+
+  org.mortbay.jetty
+  jetty-util
+
+
+  org.apache.hadoop
+  hadoop-annotations
+
+
+  com.google.protobuf
+  protobuf-java
+
+
+  com.sun.jersey
+  jersey-core
+
+
+  org.apache.curator
+  curator-client
+
+
+  org.apache.commons
+  commons-math3
+
+
+  com.google.guava
+  guava
+
+
+  org.apache.avro
+  avro
+
+
+  net.java.dev.jets3t
+  jets3t
+
+
+  com.sun.jersey
+  jersey-json
+
+
+  com.jcraft
+  jsch
+
+
+  org.mortbay.jetty
+  jetty
+
+
+  com.sun.jersey
+  jersey-server
+
+  
+
+
+  org.apache.hadoop
   hadoop-aws
   ${hadoop.compile.version}
   provided
@@ -165,6 +289,13 @@
 
 
 
+  org.apache.hadoop
+  hadoop-common
+  ${hadoop.compile.version}
+  tests
+  test
+
+
   junit
   junit
   test
@@ -191,13 +322,6 @@
 
 
   org.apache.hadoop
-  hadoop-common

[druid] branch master updated (e839660b6a -> 1f1fced6d4)

2022-09-26 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


from e839660b6a Grab the thread name in a poisoned pool (#13143)
 add 1f1fced6d4 Add JsonInputFormat option to assume newline delimited 
JSON, improve parse exception handling for multiline JSON (#13089)

No new revisions were added by this update.

Summary of changes:
 .../druid/data/input/impl/JsonInputFormat.java |  75 +++--
 .../impl/{JsonReader.java => JsonNodeReader.java}  | 125 +
 .../input/impl/CloudObjectInputSourceTest.java |   8 +-
 .../druid/data/input/impl/JsonInputFormatTest.java |  12 +-
 .../druid/data/input/impl/JsonLineReaderTest.java  |  16 ++-
 ...JsonReaderTest.java => JsonNodeReaderTest.java} |  73 +++-
 .../druid/data/input/impl/JsonReaderTest.java  |  28 +++--
 docs/ingestion/data-formats.md |   7 ++
 .../data/input/aliyun/OssInputSourceTest.java  |  10 +-
 .../google/GoogleCloudStorageInputSourceTest.java  |   8 +-
 .../input/kafkainput/KafkaInputFormatTest.java |  15 ++-
 .../druid/indexing/kafka/KafkaSamplerSpecTest.java |   4 +-
 .../kafka/supervisor/KafkaSupervisorTest.java  |   2 +
 .../indexing/kinesis/KinesisSamplerSpecTest.java   |   2 +-
 .../kinesis/supervisor/KinesisSupervisorTest.java  |   3 +
 .../apache/druid/msq/querykit/DataSourcePlan.java  |   2 +-
 .../org/apache/druid/msq/exec/MSQSelectTest.java   |   2 +-
 .../druid/msq/indexing/error/MSQWarningsTest.java  |   2 +-
 .../external/ExternalInputSpecSlicerTest.java  |   2 +-
 .../druid/data/input/s3/S3InputSourceTest.java |  18 +--
 .../druid/indexing/common/task/IndexTaskTest.java  |   2 +-
 ...ltiPhaseParallelIndexingWithNullColumnTest.java |   6 +-
 .../parallel/ParallelIndexSupervisorTaskTest.java  |   2 +-
 .../parallel/ParallelIndexTestingFactory.java  |   2 +-
 .../PartialHashSegmentGenerateTaskTest.java|   4 +-
 .../parallel/SinglePhaseParallelIndexingTest.java  |   2 +
 .../batch/parallel/SinglePhaseSubTaskSpecTest.java |   2 +-
 .../overlord/sampler/InputSourceSamplerTest.java   |   2 +-
 .../SeekableStreamIndexTaskTestBase.java   |   2 +
 .../SeekableStreamSupervisorSpecTest.java  |   6 +-
 .../seekablestream/StreamChunkParserTest.java  |   6 +-
 .../SeekableStreamSupervisorStateTest.java |   4 +-
 website/.spelling  |   2 +
 33 files changed, 311 insertions(+), 145 deletions(-)
 copy core/src/main/java/org/apache/druid/data/input/impl/{JsonReader.java => 
JsonNodeReader.java} (56%)
 copy core/src/test/java/org/apache/druid/data/input/impl/{JsonReaderTest.java 
=> JsonNodeReaderTest.java} (88%)
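The commit above adds an option letting `JsonInputFormat` assume newline-delimited JSON. As a hedged sketch of what an ingestion `inputFormat` fragment could look like (the field name `assumeNewlineDelimited` is an assumption inferred from the PR title; check the Druid data-formats docs for the exact name):

```python
def json_input_format(assume_newline_delimited=True):
    # Build an inputFormat spec dict for newline-delimited JSON ingestion.
    # "assumeNewlineDelimited" is an assumed field name, not confirmed by
    # this email; verify against docs/ingestion/data-formats.md.
    return {
        "type": "json",
        "assumeNewlineDelimited": assume_newline_delimited,
    }

spec = json_input_format()
```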


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated: Add inline descriptor Protobuf bytes decoder (#13192)

2022-10-11 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 9b8e69c99a Add inline descriptor Protobuf bytes decoder (#13192)
9b8e69c99a is described below

commit 9b8e69c99a410ba10496e375fc8cbb9c84f6d59b
Author: Jonathan Wei 
AuthorDate: Tue Oct 11 13:37:28 2022 -0500

Add inline descriptor Protobuf bytes decoder (#13192)

* Add inline descriptor Protobuf bytes decoder

* PR comments

* Update tests, check for IllegalArgumentException

* Fix license, add equals test

* Update 
extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/InlineDescriptorProtobufBytesDecoder.java

Co-authored-by: Frank Chen 

Co-authored-by: Frank Chen 
---
 docs/ingestion/data-formats.md |  20 
 extensions-core/protobuf-extensions/pom.xml|   5 +
 ...va => DescriptorBasedProtobufBytesDecoder.java} |  80 +++---
 .../protobuf/FileBasedProtobufBytesDecoder.java|  80 +++---
 .../InlineDescriptorProtobufBytesDecoder.java  |  95 
 .../data/input/protobuf/ProtobufBytesDecoder.java  |   3 +-
 .../FileBasedProtobufBytesDecoderTest.java |  20 
 .../InlineDescriptorProtobufBytesDecoderTest.java  | 123 +
 .../input/protobuf/ProtobufInputFormatTest.java|  31 +-
 website/.spelling  |   1 +
 10 files changed, 327 insertions(+), 131 deletions(-)

diff --git a/docs/ingestion/data-formats.md b/docs/ingestion/data-formats.md
index 22c1027647..db4e7f062f 100644
--- a/docs/ingestion/data-formats.md
+++ b/docs/ingestion/data-formats.md
@@ -1308,6 +1308,26 @@ Sample spec:
 }
 ```
 
+ Inline Descriptor Protobuf Bytes Decoder
+
+This Protobuf bytes decoder allows the user to provide the contents of a 
Protobuf descriptor file inline, encoded as a Base64 string, and then parses it 
to get the schema used to decode the Protobuf record from bytes.
+
+| Field | Type | Description | Required |
+|---|--|-|--|
+| type | String | Set value to `inline`. | yes |
+| descriptorString | String | A compiled Protobuf descriptor, encoded as a 
Base64 string. | yes |
+| protoMessageType | String | Protobuf message type in the descriptor.  Both 
short name and fully qualified name are accepted. The parser uses the first 
message type found in the descriptor if not specified. | no |
+
+Sample spec:
+
+```json
+"protoBytesDecoder": {
+  "type": "inline",
+  "descriptorString": ,
+  "protoMessageType": "Metrics"
+}
+```
+
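Since the `descriptorString` field expects a Base64-encoded compiled descriptor, here is a minimal sketch of producing that value (the helper names are hypothetical, not part of Druid):

```python
import base64

def encode_descriptor_bytes(raw: bytes) -> str:
    # Base64-encode the raw bytes of a compiled Protobuf descriptor
    # (e.g. the output of `protoc --descriptor_set_out=...`) for use as
    # the "descriptorString" field of the inline decoder spec.
    return base64.b64encode(raw).decode("ascii")

def encode_descriptor_file(path: str) -> str:
    # Convenience wrapper: read a descriptor file and encode it.
    with open(path, "rb") as f:
        return encode_descriptor_bytes(f.read())
```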
 # Confluent Schema Registry-based Protobuf Bytes Decoder
 
This Protobuf bytes decoder first extracts a unique `id` from input message 
bytes, and then uses it to look up the schema in the Schema Registry used to 
decode the Protobuf record from bytes.
diff --git a/extensions-core/protobuf-extensions/pom.xml 
b/extensions-core/protobuf-extensions/pom.xml
index e2e8c4a116..c7b7fc6e8b 100644
--- a/extensions-core/protobuf-extensions/pom.xml
+++ b/extensions-core/protobuf-extensions/pom.xml
@@ -163,6 +163,11 @@
   ${project.parent.version}
   test
 
+
+  nl.jqno.equalsverifier
+  equalsverifier
+  test
+
   
 
   
diff --git 
a/extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/FileBasedProtobufBytesDecoder.java
 
b/extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/DescriptorBasedProtobufBytesDecoder.java
similarity index 58%
copy from 
extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/FileBasedProtobufBytesDecoder.java
copy to 
extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/DescriptorBasedProtobufBytesDecoder.java
index ed52f7443b..d4c65c6f99 100644
--- 
a/extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/FileBasedProtobufBytesDecoder.java
+++ 
b/extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/DescriptorBasedProtobufBytesDecoder.java
@@ -19,7 +19,6 @@
 
 package org.apache.druid.data.input.protobuf;
 
-import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.github.os72.protobuf.dynamic.DynamicSchema;
 import com.google.common.annotations.VisibleForTesting;
@@ -29,52 +28,44 @@ import com.google.protobuf.DynamicMessage;
 import org.apache.druid.java.util.common.StringUtils;
 import org.apache.druid.java.util.common.parsers.ParseException;
 
-import java.io.IOException;
-import java.io.InputStream;
-import java.net.MalformedURLException;
-import java.net.URL;
 import java.nio.ByteBuffer;
 import java.util.Objects;
 import java.uti

[druid] branch master updated (51f9831 -> 6b272c8)

2021-06-10 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 51f9831  Fix wrong encoding in 
PredicateFilteredDimensionSelector.getRow (#11339)
 add 6b272c8  adjust topn heap algorithm to only use known cardinality path 
when dictionary is unique (#11186)

No new revisions were added by this update.

Summary of changes:
 .../druid/query/topn/HeapBasedTopNAlgorithm.java   |  1 -
 .../query/topn/TimeExtractionTopNAlgorithm.java|  4 ---
 .../apache/druid/query/topn/TopNQueryEngine.java   |  2 +-
 .../types/StringTopNColumnAggregatesProcessor.java | 25 --
 .../TopNColumnAggregatesProcessorFactory.java  |  2 +-
 .../apache/druid/sql/calcite/CalciteQueryTest.java | 39 ++
 6 files changed, 63 insertions(+), 10 deletions(-)




[druid] branch master updated: Allow kill task to mark segments as unused (#11501)

2021-07-29 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 9b250c5  Allow kill task to mark segments as unused (#11501)
9b250c5 is described below

commit 9b250c54aa1b18c21ff8369ee4a4a6015bbafc40
Author: Jonathan Wei 
AuthorDate: Thu Jul 29 10:48:43 2021 -0500

Allow kill task to mark segments as unused (#11501)

* Allow kill task to mark segments as unused

* Add IndexerSQLMetadataStorageCoordinator test

* Update docs/ingestion/data-management.md

Co-authored-by: Jihoon Son 

* Add warning to kill task doc

Co-authored-by: Jihoon Son 
---
 docs/ingestion/data-management.md  |  9 ++-
 .../common/actions/MarkSegmentsAsUnusedAction.java | 67 --
 .../druid/indexing/common/actions/TaskAction.java  |  1 +
 .../common/task/KillUnusedSegmentsTask.java| 23 +++-
 ...ClientKillUnusedSegmentsTaskQuerySerdeTest.java |  8 ++-
 .../common/task/KillUnusedSegmentsTaskTest.java| 53 -
 .../druid/indexing/overlord/TaskLifecycleTest.java |  8 ++-
 .../TestIndexerMetadataStorageCoordinator.java |  6 ++
 .../ClientKillUnusedSegmentsTaskQuery.java | 20 +--
 .../client/indexing/HttpIndexingServiceClient.java |  2 +-
 .../IndexerMetadataStorageCoordinator.java | 10 
 .../IndexerSQLMetadataStorageCoordinator.java  | 27 +
 .../ClientKillUnusedSegmentsTaskQueryTest.java |  9 ++-
 .../IndexerSQLMetadataStorageCoordinatorTest.java  | 32 +++
 14 files changed, 218 insertions(+), 57 deletions(-)

diff --git a/docs/ingestion/data-management.md 
b/docs/ingestion/data-management.md
index c9e592f..eb176a0 100644
--- a/docs/ingestion/data-management.md
+++ b/docs/ingestion/data-management.md
@@ -95,7 +95,9 @@ A data deletion tutorial is available at [Tutorial: Deleting 
data](../tutorials/
 
 ## Kill Task
 
-Kill tasks delete all information about a segment and removes it from deep 
storage. Segments to kill must be unused (used==0) in the Druid segment table. 
The available grammar is:
+The kill task deletes all information about segments and removes them from 
deep storage. Segments to kill must be unused (used==0) in the Druid segment 
table.
+
+The available grammar is:
 
 ```json
 {
@@ -103,10 +105,15 @@ Kill tasks delete all information about a segment and 
removes it from deep stora
 "id": ,
 "dataSource": ,
 "interval" : ,
+"markAsUnused": ,
 "context": 
 }
 ```
 
+If `markAsUnused` is true (default is false), the kill task will first mark 
any segments within the specified interval as unused, before deleting the 
unused segments within the interval.
+
+**WARNING!** The kill task permanently removes all information about the 
affected segments from the metadata store and deep storage. These segments 
cannot be recovered after the kill task runs; this operation cannot be undone. 
+
 ## Retention
 
 Druid supports retention rules, which are used to define intervals of time 
where data should be preserved, and intervals where data should be discarded.
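Following the grammar above, a kill task payload with `markAsUnused` enabled might be assembled like this (the datasource and interval values are illustrative only; `id` and `context` are optional and omitted):

```python
def kill_task(data_source: str, interval: str, mark_as_unused: bool = False) -> dict:
    # Build a kill task spec per the documented grammar. When
    # mark_as_unused is True, segments in the interval are first marked
    # unused, then the unused segments are deleted.
    return {
        "type": "kill",
        "dataSource": data_source,
        "interval": interval,
        "markAsUnused": mark_as_unused,
    }

task = kill_task("wikipedia", "2016-06-27/2016-06-28", mark_as_unused=True)
```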
diff --git 
a/server/src/main/java/org/apache/druid/client/indexing/ClientKillUnusedSegmentsTaskQuery.java
 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/actions/MarkSegmentsAsUnusedAction.java
similarity index 52%
copy from 
server/src/main/java/org/apache/druid/client/indexing/ClientKillUnusedSegmentsTaskQuery.java
copy to 
indexing-service/src/main/java/org/apache/druid/indexing/common/actions/MarkSegmentsAsUnusedAction.java
index ec008d3..5ed7b7e 100644
--- 
a/server/src/main/java/org/apache/druid/client/indexing/ClientKillUnusedSegmentsTaskQuery.java
+++ 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/actions/MarkSegmentsAsUnusedAction.java
@@ -17,56 +17,34 @@
  * under the License.
  */
 
-package org.apache.druid.client.indexing;
+package org.apache.druid.indexing.common.actions;
 
 import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonIgnore;
 import com.fasterxml.jackson.annotation.JsonProperty;
-import com.google.common.base.Preconditions;
+import com.fasterxml.jackson.core.type.TypeReference;
+import org.apache.druid.indexing.common.task.Task;
 import org.joda.time.Interval;
 
-import java.util.Objects;
-
-/**
- * Client representation of 
org.apache.druid.indexing.common.task.KillUnusedSegmentsTask. JSON 
searialization
- * fields of this class must correspond to those of 
org.apache.druid.indexing.common.task.KillUnusedSegmentsTask, except
- * for "id" and "context" fields.
- */
-public class ClientKillUnusedSegmentsTaskQuery implements ClientTaskQuery
+public class MarkSegmentsAsUnusedAction implements TaskAction
 {
-  public static final String 

[druid] branch master updated: Task reports for parallel task: single phase and sequential mode (#11688)

2021-09-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 22b41dd  Task reports for parallel task: single phase and sequential 
mode (#11688)
22b41dd is described below

commit 22b41ddbbfe2b07b085e295ba171bcdc07e04900
Author: Jonathan Wei 
AuthorDate: Thu Sep 16 13:58:11 2021 -0500

Task reports for parallel task: single phase and sequential mode (#11688)

* Task reports for parallel task: single phase and sequential mode

* Address comments

* Add null check for currentSubTaskHolder
---
 .../druid/indexing/common/task/IndexTask.java  |  17 +-
 .../parallel/ParallelIndexSupervisorTask.java  | 222 ++-
 .../batch/parallel/PartialSegmentMergeTask.java|   6 +-
 .../task/batch/parallel/PushedSegmentsReport.java  |  22 +-
 .../task/batch/parallel/SinglePhaseSubTask.java| 315 ++---
 .../AbstractParallelIndexSupervisorTaskTest.java   |  19 +-
 .../ParallelIndexSupervisorTaskResourceTest.java   |   3 +-
 .../batch/parallel/PushedSegmentsReportTest.java   |  32 +++
 .../parallel/SinglePhaseParallelIndexingTest.java  | 169 ++-
 .../incremental/RowIngestionMetersTotals.java  |  35 +++
 .../incremental/RowIngestionMetersTotalsTest.java  |  32 +++
 .../client/indexing/HttpIndexingServiceClient.java |  26 ++
 .../client/indexing/IndexingServiceClient.java |   3 +
 .../indexing/HttpIndexingServiceClientTest.java|  67 +
 .../client/indexing/NoopIndexingServiceClient.java |   7 +
 15 files changed, 903 insertions(+), 72 deletions(-)

diff --git 
a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/IndexTask.java
 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/IndexTask.java
index a798522..f22c2a0 100644
--- 
a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/IndexTask.java
+++ 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/IndexTask.java
@@ -286,7 +286,12 @@ public class IndexTask extends AbstractBatchIndexTask 
implements ChatHandler
   )
   {
 IndexTaskUtils.datasourceAuthorizationCheck(req, Action.READ, 
getDataSource(), authorizerMapper);
-Map> events = new HashMap<>();
+return Response.ok(doGetUnparseableEvents(full)).build();
+  }
+
+  public Map doGetUnparseableEvents(String full)
+  {
+Map events = new HashMap<>();
 
 boolean needsDeterminePartitions = false;
 boolean needsBuildSegments = false;
@@ -325,11 +330,10 @@ public class IndexTask extends AbstractBatchIndexTask 
implements ChatHandler
   )
   );
 }
-
-return Response.ok(events).build();
+return events;
   }
 
-  private Map doGetRowStats(String full)
+  public Map doGetRowStats(String full)
   {
 Map returnMap = new HashMap<>();
 Map totalsMap = new HashMap<>();
@@ -784,6 +788,11 @@ public class IndexTask extends AbstractBatchIndexTask 
implements ChatHandler
 return hllCollectors;
   }
 
+  public IngestionState getIngestionState()
+  {
+return ingestionState;
+  }
+
   /**
* This method reads input data row by row and adds the read row to a proper 
segment using {@link BaseAppenderatorDriver}.
* If there is no segment for the row, a new one is created.  Segments can 
be published in the middle of reading inputs
diff --git 
a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
index b66f49f..19b965c 100644
--- 
a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
+++ 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
@@ -26,6 +26,7 @@ import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 import com.google.common.base.Throwables;
 import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.Multimap;
 import it.unimi.dsi.fastutil.objects.Object2IntMap;
 import it.unimi.dsi.fastutil.objects.Object2IntOpenHashMap;
@@ -66,6 +67,8 @@ import org.apache.druid.java.util.common.Pair;
 import org.apache.druid.java.util.common.StringUtils;
 import org.apache.druid.java.util.common.granularity.Granularity;
 import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.segment.incremental.RowIngestionMeters;
+import org.apache.druid.segment.incremental.RowIngestionMetersTotals;
 import org.apache.druid.segment.indexing.TuningConfig;
 import org.apache.druid.segment.indexing.granularity.Arbitr

[druid] branch master updated: Avoid primary key violation in segment tables under certain conditions when appending data to same interval (#11714)

2021-09-22 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 2355a60  Avoid primary key violation in segment tables under certain 
conditions when appending data to same interval (#11714)
2355a60 is described below

commit 2355a60419fe423faae9af7d95b97199b11309d7
Author: Agustin Gonzalez 
AuthorDate: Wed Sep 22 17:21:48 2021 -0700

Avoid primary key violation in segment tables under certain conditions when 
appending data to same interval (#11714)

* Fix issue of duplicate key under certain conditions when loading late 
data in streaming. Also fixes a documentation issue with 
skipSegmentLineageCheck.

* maxId may be null at this point, need to check for that

* Remove hypothetical case (it cannot happen)

* Revert compaction is simply "killing" the compacted segment and 
previously, used, overshadowed segments are visible again

* Add comments
---
 .../IndexerSQLMetadataStorageCoordinator.java  |  85 +++--
 .../appenderator/BaseAppenderatorDriver.java   |   2 +-
 .../realtime/appenderator/SegmentAllocator.java|   2 +-
 .../appenderator/StreamAppenderatorDriver.java |   6 +-
 .../IndexerSQLMetadataStorageCoordinatorTest.java  | 407 -
 5 files changed, 476 insertions(+), 26 deletions(-)

diff --git 
a/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
 
b/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
index 4887c90..c5081f9 100644
--- 
a/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
+++ 
b/server/src/main/java/org/apache/druid/metadata/IndexerSQLMetadataStorageCoordinator.java
@@ -253,13 +253,13 @@ public class IndexerSQLMetadataStorageCoordinator 
implements IndexerMetadataStor
 return numSegmentsMarkedUnused;
   }
 
-  private List getPendingSegmentsForIntervalWithHandle(
+  private Set getPendingSegmentsForIntervalWithHandle(
   final Handle handle,
   final String dataSource,
   final Interval interval
   ) throws IOException
   {
-final List identifiers = new ArrayList<>();
+final Set identifiers = new HashSet<>();
 
 final ResultIterator dbSegments =
 handle.createQuery(
@@ -843,15 +843,30 @@ public class IndexerSQLMetadataStorageCoordinator 
implements IndexerMetadataStor
   .execute();
   }
 
+  /**
+   * This function creates a new segment for the given 
datasource/interval/etc. A critical
+   * aspect of the creation is to make sure that the new version & new 
partition number will make
+   * sense given the existing segments & pending segments; it is also very 
important to avoid
+   * clashes with existing pending & used/unused segments.
+   * @param handle Database handle
+   * @param dataSource datasource for the new segment
+   * @param interval interval for the new segment
+   * @param partialShardSpec Shard spec info minus segment id stuff
+   * @param existingVersion Version of segments in interval, used to compute 
the version of the very first segment in
+   *interval
+   * @return
+   * @throws IOException
+   */
   @Nullable
   private SegmentIdWithShardSpec createNewSegment(
   final Handle handle,
   final String dataSource,
   final Interval interval,
   final PartialShardSpec partialShardSpec,
-  final String maxVersion
+  final String existingVersion
   ) throws IOException
   {
+// Get the time chunk and associated data segments for the given interval, 
if any
 final List> existingChunks = 
getTimelineForIntervalsWithHandle(
 handle,
 dataSource,
@@ -884,66 +899,94 @@ public class IndexerSQLMetadataStorageCoordinator 
implements IndexerMetadataStor
 // See PartitionIds.
 .filter(segment -> 
segment.getShardSpec().sharePartitionSpace(partialShardSpec))) {
   // Don't use the stream API for performance.
+  // Note that this will compute the max id of existing, visible, data 
segments in the time chunk:
   if (maxId == null || maxId.getShardSpec().getPartitionNum() < 
segment.getShardSpec().getPartitionNum()) {
 maxId = SegmentIdWithShardSpec.fromDataSegment(segment);
   }
 }
   }
 
-  final List pendings = 
getPendingSegmentsForIntervalWithHandle(
+  // Get the version of the existing chunk, we might need it in some of 
the cases below
+  // to compute the new identifier's version
+  @Nullable
+  final String versionOfExistingChunk;
+  if (!existingChunks.isEmpty()) {
+// remember only one chunk possible for given interval so get the 
first & only one
+versionOfExistingChunk = existingChunks.get(0).getVersion(

[druid] branch master updated: Minor processor quota computation fix + docs (#11783)

2021-10-08 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new b6b42d3  Minor processor quota computation fix + docs (#11783)
b6b42d3 is described below

commit b6b42d39367f1ff1d7a1aa7e0064ca0ed9c2e92f
Author: Arun Ramani <84351090+arunram...@users.noreply.github.com>
AuthorDate: Fri Oct 8 20:52:03 2021 -0700

Minor processor quota computation fix + docs (#11783)

* cpu/cpuset cgroup and procfs data gathering

* Renames and default values

* Formatting

* Trigger Build

* Add cgroup monitors

* Return 0 if no period

* Update

* Minor processor quota computation fix + docs

* Address comments

* Address comments

* Fix spellcheck

Co-authored-by: arunramani-imply 
<84351090+arunramani-im...@users.noreply.github.com>
---
 .../druid/java/util/metrics/CgroupCpuMonitor.java| 20 
 .../java/util/metrics/CgroupCpuMonitorTest.java  | 10 ++
 docs/configuration/index.md  |  5 -
 docs/operations/metrics.md   | 18 --
 website/.spelling|  1 +
 5 files changed, 47 insertions(+), 7 deletions(-)

diff --git 
a/core/src/main/java/org/apache/druid/java/util/metrics/CgroupCpuMonitor.java 
b/core/src/main/java/org/apache/druid/java/util/metrics/CgroupCpuMonitor.java
index 826465b..ac4d545 100644
--- 
a/core/src/main/java/org/apache/druid/java/util/metrics/CgroupCpuMonitor.java
+++ 
b/core/src/main/java/org/apache/druid/java/util/metrics/CgroupCpuMonitor.java
@@ -65,12 +65,24 @@ public class CgroupCpuMonitor extends FeedDefiningMonitor
 emitter.emit(builder.build("cgroup/cpu/shares", cpuSnapshot.getShares()));
 emitter.emit(builder.build(
 "cgroup/cpu/cores_quota",
-cpuSnapshot.getPeriodUs() == 0
-? 0
-: ((double) cpuSnapshot.getQuotaUs()
-  ) / cpuSnapshot.getPeriodUs()
+computeProcessorQuota(cpuSnapshot.getQuotaUs(), 
cpuSnapshot.getPeriodUs())
 ));
 
 return true;
   }
+
+  /**
+   * Calculates the total cores allocated through quotas. A negative value 
indicates that no quota has been specified.
+   * We use -1 because that's the default value used in the cgroup.
+   *
+   * @param quotaUs  the cgroup quota value.
+   * @param periodUs the cgroup period value.
+   * @return the calculated processor quota, -1 if no quota or period set.
+   */
+  public static double computeProcessorQuota(long quotaUs, long periodUs)
+  {
+return quotaUs < 0 || periodUs == 0
+   ? -1
+   : (double) quotaUs / periodUs;
+  }
 }
diff --git 
a/core/src/test/java/org/apache/druid/java/util/metrics/CgroupCpuMonitorTest.java
 
b/core/src/test/java/org/apache/druid/java/util/metrics/CgroupCpuMonitorTest.java
index 4a05f5f..67c03d2 100644
--- 
a/core/src/test/java/org/apache/druid/java/util/metrics/CgroupCpuMonitorTest.java
+++ 
b/core/src/test/java/org/apache/druid/java/util/metrics/CgroupCpuMonitorTest.java
@@ -79,4 +79,14 @@ public class CgroupCpuMonitorTest
 Assert.assertEquals("cgroup/cpu/cores_quota", coresEvent.get("metric"));
 Assert.assertEquals(3.0D, coresEvent.get("value"));
   }
+
+  @Test
+  public void testQuotaCompute()
+  {
+Assert.assertEquals(-1, CgroupCpuMonitor.computeProcessorQuota(-1, 
10), 0);
+Assert.assertEquals(0, CgroupCpuMonitor.computeProcessorQuota(0, 10), 
0);
+Assert.assertEquals(-1, CgroupCpuMonitor.computeProcessorQuota(10, 0), 
0);
+Assert.assertEquals(2.0D, CgroupCpuMonitor.computeProcessorQuota(20, 
10), 0);
+Assert.assertEquals(0.5D, CgroupCpuMonitor.computeProcessorQuota(5, 
10), 0);
+  }
 }
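For quick experimentation outside the JVM, the quota computation above can be mirrored in a few lines of Python (a sketch of the same logic, not Druid's API):

```python
def compute_processor_quota(quota_us: int, period_us: int) -> float:
    # Mirrors CgroupCpuMonitor.computeProcessorQuota: a negative quota or a
    # zero period means no quota was specified, signalled by -1 (the cgroup
    # default); otherwise the quota is the ratio of quota to period.
    if quota_us < 0 or period_us == 0:
        return -1.0
    return quota_us / period_us
```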
diff --git a/docs/configuration/index.md b/docs/configuration/index.md
index 1d20029..c20d801 100644
--- a/docs/configuration/index.md
+++ b/docs/configuration/index.md
@@ -362,12 +362,15 @@ The following monitors are available:
 ||---|
 |`org.apache.druid.client.cache.CacheMonitor`|Emits metrics (to logs) about 
the segment results cache for Historical and Broker processes. Reports typical 
cache statistics including hits, misses, rates, and size (bytes and number of 
entries), as well as timeouts and errors.|
 |`org.apache.druid.java.util.metrics.SysMonitor`|Reports on various system 
activities and statuses using the [SIGAR 
library](https://github.com/hyperic/sigar). Requires execute privileges on 
files in `java.io.tmpdir`. Do not set `java.io.tmpdir` to `noexec` when using 
`SysMonitor`.|
-|`org.apache.druid.server.metrics.HistoricalMetricsMonitor`|Reports statistics 
on Historical processes. Available only on Historical processes.|
 |`org.apache.druid

[druid] branch master updated: Simplify ITHttpInputSourceTest to mitigate flakiness (#11751)

2021-10-12 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 887cecf  Simplify ITHttpInputSourceTest to mitigate flakiness (#11751)
887cecf is described below

commit 887cecf29e8b813029911fb05745764cce155c94
Author: Agustin Gonzalez 
AuthorDate: Tue Oct 12 09:51:27 2021 -0700

Simplify ITHttpInputSourceTest to mitigate flakiness (#11751)

* Increment retry count to add more time for tests to pass

* Re-enable ITHttpInputSourceTest

* Restore original count

* This test is about input source, hash partitioning takes longer and not 
required thus changing to dynamic

* Further simplify by removing sketches
---
 .../druid/tests/indexer/ITHttpInputSourceTest.java |  2 -
 .../wikipedia_http_inputsource_queries.json| 87 +-
 .../indexer/wikipedia_http_inputsource_task.json   | 18 +
 3 files changed, 19 insertions(+), 88 deletions(-)

diff --git 
a/integration-tests/src/test/java/org/apache/druid/tests/indexer/ITHttpInputSourceTest.java
 
b/integration-tests/src/test/java/org/apache/druid/tests/indexer/ITHttpInputSourceTest.java
index c72f080..bb0d7c5 100644
--- 
a/integration-tests/src/test/java/org/apache/druid/tests/indexer/ITHttpInputSourceTest.java
+++ 
b/integration-tests/src/test/java/org/apache/druid/tests/indexer/ITHttpInputSourceTest.java
@@ -36,8 +36,6 @@ public class ITHttpInputSourceTest extends 
AbstractITBatchIndexTest
   private static final String INDEX_TASK = 
"/indexer/wikipedia_http_inputsource_task.json";
   private static final String INDEX_QUERIES_RESOURCE = 
"/indexer/wikipedia_http_inputsource_queries.json";
 
-  // Ignore while we debug...
-  @Test(enabled = false)
   public void doTest() throws IOException
   {
 final String indexDatasource = "wikipedia_http_inputsource_test_" + 
UUID.randomUUID();
diff --git 
a/integration-tests/src/test/resources/indexer/wikipedia_http_inputsource_queries.json
 
b/integration-tests/src/test/resources/indexer/wikipedia_http_inputsource_queries.json
index 11496c2..f0cbb1c 100644
--- 
a/integration-tests/src/test/resources/indexer/wikipedia_http_inputsource_queries.json
+++ 
b/integration-tests/src/test/resources/indexer/wikipedia_http_inputsource_queries.json
@@ -16,82 +16,31 @@
 ]
 },
 {
-"description": "timeseries, datasketch aggs, all",
+"description": "simple aggr",
 "query":{
-"queryType" : "timeseries",
-"dataSource": "%%DATASOURCE%%",
-"granularity":"day",
-"intervals":[
-"2016-06-27/P1D"
-],
-"filter":null,
-"aggregations":[
+"queryType" : "topN",
+"dataSource" : "%%DATASOURCE%%",
+"intervals" : ["2016-06-27/2016-06-28"],
+"granularity" : "all",
+"dimension" : "page",
+"metric" : "count",
+"threshold" : 3,
+"aggregations" : [
 {
-"type": "HLLSketchMerge",
-"name": "approxCountHLL",
-"fieldName": "HLLSketchBuild",
-"lgK": 12,
-"tgtHllType": "HLL_4",
-"round": true
-},
-{
-"type":"thetaSketch",
-"name":"approxCountTheta",
-"fieldName":"thetaSketch",
-"size":16384,
-"shouldFinalize":true,
-"isInputThetaSketch":false,
-"errorBoundsStdDev":null
-},
-{
-"type":"quantilesDoublesSketch",
-"name":"quantilesSketch",
-"fieldName":"quantilesDoublesSketch",
-"k":128
+"type" : "count",
+"name" : "count"
 }
 ]
 },
 "expectedResults":[
 {
-"timestamp" : "2016-06-27T00:00:00.000Z",
-"result" : {
- 

[druid] branch master updated (9ca8f1e -> a96aed0)

2021-10-27 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 9ca8f1e  Remove IncrementalIndex template modifier (#11160)
 add a96aed0  Fix indefinite WAITING batch task when lock is revoked (#11788)

No new revisions were added by this update.

Summary of changes:
 .../common/actions/TimeChunkLockAcquireAction.java |  2 +-
 .../actions/TimeChunkLockTryAcquireAction.java |  3 +-
 .../common/task/AbstractBatchIndexTask.java|  4 ++
 .../common/task/AbstractFixedIntervalTask.java | 15 +-
 .../task/AppenderatorDriverRealtimeIndexTask.java  | 12 -
 .../indexing/common/task/HadoopIndexTask.java  | 18 ++-
 .../indexing/common/task/RealtimeIndexTask.java| 15 --
 .../SinglePhaseParallelIndexTaskRunner.java|  4 ++
 .../apache/druid/indexing/overlord/LockResult.java | 11 +++--
 .../druid/indexing/overlord/TaskLockbox.java   | 12 +++--
 .../SeekableStreamIndexTaskRunner.java | 12 -
 .../druid/indexing/overlord/TaskLifecycleTest.java | 55 ++
 12 files changed, 145 insertions(+), 18 deletions(-)
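Per the commit title, the fix makes a batch task fail once its time-chunk lock is revoked, instead of treating revocation like "not yet acquired" and waiting indefinitely. A minimal sketch of that control-flow change, with invented names (none of these are Druid classes):

```java
// Illustrative sketch only; class and enum names are invented, not Druid's API.
public class LockAcquireSketch
{
  public enum LockState { ACQUIRED, PENDING, REVOKED }

  /**
   * Polls lock states until ACQUIRED. Returns false (i.e. the task should
   * fail) as soon as a REVOKED state is seen; the bug being fixed was the
   * equivalent of treating REVOKED like PENDING and waiting forever.
   */
  public static boolean acquireOrFail(Iterable<LockState> polls)
  {
    for (LockState state : polls) {
      if (state == LockState.ACQUIRED) {
        return true;
      }
      if (state == LockState.REVOKED) {
        return false; // fail fast instead of staying in WAITING
      }
      // PENDING: keep polling
    }
    return false;
  }
}
```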

-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated (7b4edc9 -> 5faa897)

2020-06-30 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 7b4edc9  Update web address to datasketches.apache.org (#10096)
 add 5faa897  Join filter pre-analysis simplifications and sanity checks. (#10104)

No new revisions were added by this update.

Summary of changes:
 .../druid/benchmark/JoinAndLookupBenchmark.java| 113 +---
 .../druid/query/planning/DataSourceAnalysis.java   |  52 ++--
 .../apache/druid/segment/join/HashJoinSegment.java |  13 +-
 .../join/HashJoinSegmentStorageAdapter.java|  48 ++--
 .../org/apache/druid/segment/join/Joinables.java   | 112 ++--
 .../segment/join/filter/JoinFilterAnalyzer.java|  80 +++---
 .../segment/join/filter/JoinFilterPreAnalysis.java |  67 +++--
 .../join/filter/JoinFilterPreAnalysisKey.java  |  97 +++
 .../druid/segment/join/filter/JoinableClauses.java |  25 +-
 .../filter/rewrite/JoinFilterPreAnalysisGroup.java | 149 ---
 .../filter/rewrite/JoinFilterRewriteConfig.java|  41 +++
 .../query/planning/DataSourceAnalysisTest.java |  30 ++-
 .../BaseHashJoinSegmentStorageAdapterTest.java |  40 ++-
 .../join/HashJoinSegmentStorageAdapterTest.java| 145 +-
 .../druid/segment/join/HashJoinSegmentTest.java|  28 +-
 .../druid/segment/join/JoinFilterAnalyzerTest.java | 298 +
 .../apache/druid/segment/join/JoinablesTest.java   |  89 --
 .../appenderator/SinkQuerySegmentWalker.java   |  12 +-
 .../druid/server/LocalQuerySegmentWalker.java  |  29 +-
 .../druid/server/coordination/ServerManager.java   |  12 +-
 .../druid/server/ClientQuerySegmentWalkerTest.java |   8 +-
 .../server/TestClusterQuerySegmentWalker.java  |  14 +-
 22 files changed, 662 insertions(+), 840 deletions(-)
 create mode 100644 
processing/src/main/java/org/apache/druid/segment/join/filter/JoinFilterPreAnalysisKey.java
 delete mode 100644 
processing/src/main/java/org/apache/druid/segment/join/filter/rewrite/JoinFilterPreAnalysisGroup.java


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated: Fix Stack overflow with infinite loop in ReduceExpressionsRule of HepProgram (#10120)

2020-07-01 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 1676ba2  Fix Stack overflow with infinite loop in ReduceExpressionsRule of HepProgram (#10120)
1676ba2 is described below

commit 1676ba22e300ea95cc92c0808f390aaa769546f9
Author: Maytas Monsereenusorn 
AuthorDate: Wed Jul 1 17:48:09 2020 -0700

Fix Stack overflow with infinite loop in ReduceExpressionsRule of HepProgram (#10120)

* Fix Stack overflow with SELECT ARRAY ['Hello', NULL]

* address comments
---
 .../apache/druid/sql/calcite/planner/Rules.java| 30 +-
 .../apache/druid/sql/calcite/CalciteQueryTest.java | 13 ++
 2 files changed, 42 insertions(+), 1 deletion(-)

diff --git a/sql/src/main/java/org/apache/druid/sql/calcite/planner/Rules.java b/sql/src/main/java/org/apache/druid/sql/calcite/planner/Rules.java
index 03b1a31..c9135d5 100644
--- a/sql/src/main/java/org/apache/druid/sql/calcite/planner/Rules.java
+++ b/sql/src/main/java/org/apache/druid/sql/calcite/planner/Rules.java
@@ -26,10 +26,13 @@ import org.apache.calcite.plan.RelOptMaterialization;
 import org.apache.calcite.plan.RelOptPlanner;
 import org.apache.calcite.plan.RelOptRule;
 import org.apache.calcite.plan.RelTraitSet;
+import org.apache.calcite.plan.hep.HepProgram;
+import org.apache.calcite.plan.hep.HepProgramBuilder;
 import org.apache.calcite.plan.volcano.AbstractConverter;
 import org.apache.calcite.rel.RelNode;
 import org.apache.calcite.rel.core.RelFactories;
 import org.apache.calcite.rel.metadata.DefaultRelMetadataProvider;
+import org.apache.calcite.rel.metadata.RelMetadataProvider;
 import org.apache.calcite.rel.rules.AggregateCaseToFilterRule;
 import org.apache.calcite.rel.rules.AggregateExpandDistinctAggregatesRule;
 import org.apache.calcite.rel.rules.AggregateJoinTransposeRule;
@@ -85,6 +88,16 @@ public class Rules
   public static final int DRUID_CONVENTION_RULES = 0;
   public static final int BINDABLE_CONVENTION_RULES = 1;
 
+  // Due to a Calcite bug (CALCITE-3845), ReduceExpressionsRule can treat an expression that is identical to the
+  // previous input expression as reduced. The expression is not actually reduced, yet it is still marked as reduced,
+  // which sent Calcite into an infinite loop of "reducing" the same expression over and over.
+  // Calcite 1.23.0 fixes this by no longer marking an expression as reduced in that case. However, while we are
+  // still on Calcite 1.21.0, a workaround is to limit the number of pattern matches and so avoid the infinite loop.
+  private static final String HEP_DEFAULT_MATCH_LIMIT_CONFIG_STRING = "druid.sql.planner.hepMatchLimit";
+  private static final int HEP_DEFAULT_MATCH_LIMIT = Integer.valueOf(
+  System.getProperty(HEP_DEFAULT_MATCH_LIMIT_CONFIG_STRING, "1200")
+  );
+
   // Rules from RelOptUtil's registerBaseRules, minus:
   //
  // 1) AggregateExpandDistinctAggregatesRule (it'll be added back later if approximate count distinct is disabled)
@@ -191,12 +204,14 @@ public class Rules
 
  public static List<Program> programs(final PlannerContext plannerContext, final QueryMaker queryMaker)
   {
+
+
 // Program that pre-processes the tree before letting the full-on VolcanoPlanner loose.
 final Program preProgram =
 Programs.sequence(
 Programs.subQuery(DefaultRelMetadataProvider.INSTANCE),
 DecorrelateAndTrimFieldsProgram.INSTANCE,
-Programs.hep(REDUCTION_RULES, true, DefaultRelMetadataProvider.INSTANCE)
+buildHepProgram(REDUCTION_RULES, true, DefaultRelMetadataProvider.INSTANCE, HEP_DEFAULT_MATCH_LIMIT)
 );
 
 return ImmutableList.of(
@@ -205,6 +220,19 @@ public class Rules
 );
   }
 
+  private static Program buildHepProgram(Iterable<? extends RelOptRule> rules,
+ boolean noDag,
+ RelMetadataProvider metadataProvider,
+ int matchLimit)
+  {
+final HepProgramBuilder builder = HepProgram.builder();
+builder.addMatchLimit(matchLimit);
+for (RelOptRule rule : rules) {
+  builder.addRuleInstance(rule);
+}
+return Programs.of(builder.build(), noDag, metadataProvider);
+  }
+
   private static List druidConventionRuleSet(
   final PlannerContext plannerContext,
   final QueryMaker queryMaker
diff --git a/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java b/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java
index 17c970f..483cdd2 100644
--- a/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java
+++ b/sql/src/test/java/org/apache/druid/sql/calcite/CalciteQueryTest.java
@@ -140,6 +140,19 @@ public class CalciteQueryTest extends BaseCa
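The diff is cut off here by the archive. The effect of the match limit can be illustrated without Calcite: the sketch below (invented names, not Calcite's API) models the CALCITE-3845 pattern, a rule that always claims progress while returning its input unchanged. With an unbounded fixed-point loop that would never terminate; capping the number of rule firings, as addMatchLimit does for the HepPlanner, guarantees it does.

```java
// Illustrative sketch only; not Calcite code.
public class MatchLimitSketch
{
  /** A rule returns a (possibly unchanged) expression and claims whether it made progress. */
  public interface Rule
  {
    String apply(String expr);

    boolean claimsProgress();
  }

  /**
   * Fires the rule for as long as it claims progress, but at most matchLimit
   * times. A buggy rule that always claims progress while returning its input
   * unchanged would otherwise loop forever; the limit forces termination.
   * Returns the number of times the rule fired.
   */
  public static int fire(String expr, Rule rule, int matchLimit)
  {
    int fired = 0;
    while (fired < matchLimit) {
      final String next = rule.apply(expr);
      fired++;
      if (!rule.claimsProgress()) {
        break; // genuine fixed point reached
      }
      expr = next;
    }
    return fired;
  }
}
```

With the buggy rule, `fire` stops only because of the limit; a well-behaved rule stops as soon as it reports no progress.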

[druid] branch master updated (ddda2a4 -> c86e7ce)

2020-07-06 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


 from ddda2a4  VersionedIntervalTimeline: Fix thread-unsafe call to "lookup". (#10130)
 add c86e7ce  bump version to 0.20.0-SNAPSHOT (#10124)

No new revisions were added by this update.

Summary of changes:
 benchmarks/pom.xml   | 4 ++--
 cloud/aws-common/pom.xml | 2 +-
 cloud/gcp-common/pom.xml | 2 +-
 core/pom.xml | 2 +-
 distribution/pom.xml | 2 +-
 extendedset/pom.xml  | 2 +-
 extensions-contrib/aliyun-oss-extensions/pom.xml | 2 +-
 extensions-contrib/ambari-metrics-emitter/pom.xml| 2 +-
 extensions-contrib/cassandra-storage/pom.xml | 2 +-
 extensions-contrib/cloudfiles-extensions/pom.xml | 2 +-
 extensions-contrib/distinctcount/pom.xml | 2 +-
 extensions-contrib/dropwizard-emitter/pom.xml| 2 +-
 extensions-contrib/gce-extensions/pom.xml| 2 +-
 extensions-contrib/graphite-emitter/pom.xml  | 2 +-
 extensions-contrib/influx-extensions/pom.xml | 2 +-
 extensions-contrib/influxdb-emitter/pom.xml  | 2 +-
 extensions-contrib/kafka-emitter/pom.xml | 2 +-
 extensions-contrib/materialized-view-maintenance/pom.xml | 2 +-
 extensions-contrib/materialized-view-selection/pom.xml   | 2 +-
 extensions-contrib/momentsketch/pom.xml  | 2 +-
 extensions-contrib/moving-average-query/pom.xml  | 2 +-
 extensions-contrib/opentsdb-emitter/pom.xml  | 2 +-
 extensions-contrib/redis-cache/pom.xml   | 2 +-
 extensions-contrib/sqlserver-metadata-storage/pom.xml| 2 +-
 extensions-contrib/statsd-emitter/pom.xml| 2 +-
 extensions-contrib/tdigestsketch/pom.xml | 2 +-
 extensions-contrib/thrift-extensions/pom.xml | 2 +-
 extensions-contrib/time-min-max/pom.xml  | 2 +-
 extensions-contrib/virtual-columns/pom.xml   | 2 +-
 extensions-core/avro-extensions/pom.xml  | 2 +-
 extensions-core/azure-extensions/pom.xml | 2 +-
 extensions-core/datasketches/pom.xml | 2 +-
 extensions-core/druid-basic-security/pom.xml | 2 +-
 extensions-core/druid-bloom-filter/pom.xml   | 2 +-
 extensions-core/druid-kerberos/pom.xml   | 2 +-
 extensions-core/druid-pac4j/pom.xml  | 2 +-
 extensions-core/druid-ranger-security/pom.xml| 2 +-
 extensions-core/ec2-extensions/pom.xml   | 2 +-
 extensions-core/google-extensions/pom.xml| 2 +-
 extensions-core/hdfs-storage/pom.xml | 2 +-
 extensions-core/histogram/pom.xml| 2 +-
 extensions-core/kafka-extraction-namespace/pom.xml   | 2 +-
 extensions-core/kafka-indexing-service/pom.xml   | 2 +-
 extensions-core/kinesis-indexing-service/pom.xml | 2 +-
 extensions-core/lookups-cached-global/pom.xml| 2 +-
 extensions-core/lookups-cached-single/pom.xml| 2 +-
 extensions-core/mysql-metadata-storage/pom.xml   | 2 +-
 extensions-core/orc-extensions/pom.xml   | 2 +-
 extensions-core/parquet-extensions/pom.xml   | 2 +-
 extensions-core/postgresql-metadata-storage/pom.xml  | 2 +-
 extensions-core/protobuf-extensions/pom.xml  | 2 +-
 extensions-core/s3-extensions/pom.xml| 2 +-
 extensions-core/simple-client-sslcontext/pom.xml | 2 +-
 extensions-core/stats/pom.xml| 2 +-
 hll/pom.xml  | 2 +-
 indexing-hadoop/pom.xml  | 2 +-
 indexing-service/pom.xml | 2 +-
 integration-tests/pom.xml| 2 +-
 pom.xml  | 2 +-
 processing/pom.xml   | 2 +-
 server/pom.xml   | 2 +-
 services/pom.xml | 2 +-
 sql/pom.xml  | 2 +-
 web-console/pom.xml  | 2 +-
 website/pom.xml  | 2 +-
 65 files changed, 66 insertions(+), 66 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated (54a8fb8 -> b7f4ce7)

2020-07-10 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 54a8fb8  Fix flaky tests in DruidCoordinatorTest (#10157)
 add b7f4ce7  Update ambari-metrics-common to version 2.6.1.0.0  (#10165)

No new revisions were added by this update.

Summary of changes:
 extensions-contrib/ambari-metrics-emitter/pom.xml  | 10 +-
 .../ambari/metrics/AmbariMetricsEmitter.java   | 37 +--
 ...nfigTest.java => AmbariMetricsEmitterTest.java} | 42 +++---
 3 files changed, 48 insertions(+), 41 deletions(-)
 copy 
extensions-contrib/ambari-metrics-emitter/src/test/java/org/apache/druid/emitter/ambari/metrics/{AmbariMetricsEmitterConfigTest.java
 => AmbariMetricsEmitterTest.java} (56%)


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated: Cluster wide default query context setting (#10208)

2020-07-29 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 574b062  Cluster wide default query context setting (#10208)
574b062 is described below

commit 574b062f1f6f1cf0637d99d4ea540a95971c7489
Author: Maytas Monsereenusorn 
AuthorDate: Wed Jul 29 15:19:18 2020 -0700

Cluster wide default query context setting (#10208)

* Cluster wide default query context setting

* Cluster wide default query context setting

* Cluster wide default query context setting

* add docs

* fix docs

* update props

* fix checkstyle

* fix checkstyle

* fix checkstyle

* update docs

* address comments

* fix checkstyle

* fix checkstyle

* fix checkstyle

* fix checkstyle

* fix checkstyle

* fix NPE
---
 .../benchmark/GroupByTypeInterfaceBenchmark.java   |   2 -
 .../query/CachingClusteredClientBenchmark.java |   2 -
 .../druid/benchmark/query/GroupByBenchmark.java|   2 -
 docs/configuration/index.md|  37 +--
 docs/querying/query-context.md |   7 +-
 .../druid/segment/MapVirtualColumnGroupByTest.java |   2 -
 .../{QueryConfig.java => DefaultQueryConfig.java}  |  45 
 .../java/org/apache/druid/query/QueryContexts.java |  11 ++
 .../epinephelinae/GroupByQueryEngineV2.java|  10 +-
 .../epinephelinae/vector/VectorGroupByEngine.java  |   7 +-
 .../query/groupby/strategy/GroupByStrategyV2.java  |   7 +-
 .../query/timeseries/TimeseriesQueryEngine.java|  16 +--
 .../apache/druid/query/DefaultQueryConfigTest.java |  82 ++
 ...GroupByLimitPushDownInsufficientBufferTest.java |   6 -
 .../GroupByLimitPushDownMultiNodeMergeTest.java|   6 -
 .../query/groupby/GroupByMultiSegmentTest.java |   2 -
 .../query/groupby/GroupByQueryMergeBufferTest.java |   2 -
 .../groupby/GroupByQueryRunnerFailureTest.java |   2 -
 .../query/groupby/GroupByQueryRunnerTest.java  |   2 -
 .../query/groupby/NestedQueryPushDownTest.java |   6 -
 .../apache/druid/query/search/QueryConfigTest.java |  80 --
 .../apache/druid/guice/QueryToolChestModule.java   |   4 +-
 .../org/apache/druid/server/QueryLifecycle.java|  14 ++-
 .../apache/druid/server/QueryLifecycleFactory.java |   8 +-
 .../org/apache/druid/server/QueryResourceTest.java | 122 -
 .../druid/sql/calcite/util/CalciteTests.java   |   5 +-
 26 files changed, 307 insertions(+), 182 deletions(-)

diff --git a/benchmarks/src/test/java/org/apache/druid/benchmark/GroupByTypeInterfaceBenchmark.java b/benchmarks/src/test/java/org/apache/druid/benchmark/GroupByTypeInterfaceBenchmark.java
index d12ff87..d05da5e 100644
--- a/benchmarks/src/test/java/org/apache/druid/benchmark/GroupByTypeInterfaceBenchmark.java
+++ b/benchmarks/src/test/java/org/apache/druid/benchmark/GroupByTypeInterfaceBenchmark.java
@@ -40,7 +40,6 @@ import org.apache.druid.offheap.OffheapBufferGenerator;
 import org.apache.druid.query.DruidProcessingConfig;
 import org.apache.druid.query.FinalizeResultsQueryRunner;
 import org.apache.druid.query.Query;
-import org.apache.druid.query.QueryConfig;
 import org.apache.druid.query.QueryPlus;
 import org.apache.druid.query.QueryRunner;
 import org.apache.druid.query.QueryRunnerFactory;
@@ -399,7 +398,6 @@ public class GroupByTypeInterfaceBenchmark
 new GroupByStrategyV2(
 druidProcessingConfig,
 configSupplier,
-QueryConfig::new,
 bufferPool,
 mergePool,
 new ObjectMapper(new SmileFactory()),
diff --git a/benchmarks/src/test/java/org/apache/druid/benchmark/query/CachingClusteredClientBenchmark.java b/benchmarks/src/test/java/org/apache/druid/benchmark/query/CachingClusteredClientBenchmark.java
index 877aca5..6c53abd 100644
--- a/benchmarks/src/test/java/org/apache/druid/benchmark/query/CachingClusteredClientBenchmark.java
+++ b/benchmarks/src/test/java/org/apache/druid/benchmark/query/CachingClusteredClientBenchmark.java
@@ -61,7 +61,6 @@ import org.apache.druid.query.Druids;
 import org.apache.druid.query.FinalizeResultsQueryRunner;
 import org.apache.druid.query.FluentQueryRunnerBuilder;
 import org.apache.druid.query.Query;
-import org.apache.druid.query.QueryConfig;
 import org.apache.druid.query.QueryContexts;
 import org.apache.druid.query.QueryPlus;
 import org.apache.druid.query.QueryRunner;
@@ -373,7 +372,6 @@ public class CachingClusteredClientBenchmark
 new GroupByStrategyV2(
 processingConfig,
 configSupplier,
-QueryConfig::new,
 bufferPool,
 mergeBufferPool,
 mapper,
diff --git 
a/benchmarks/src/test/java/org/apache/druid/benchmark/qu
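The rest of this diff is truncated by the archive. The merge semantics the feature implies can be sketched independently of Druid's classes (the method name below is illustrative, not Druid's API): server-configured defaults are applied first, and any key the user sets in an individual query's context overrides the cluster-wide default.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of merging cluster-wide default query context values
// with a per-query context; not Druid's actual DefaultQueryConfig code.
public class DefaultContextSketch
{
  /**
   * Applies cluster defaults, then overlays the user-supplied context so
   * that per-query values take precedence.
   */
  public static Map<String, Object> mergeContext(Map<String, ?> clusterDefaults, Map<String, ?> userContext)
  {
    final Map<String, Object> merged = new HashMap<>(clusterDefaults);
    merged.putAll(userContext); // per-query keys win over cluster defaults
    return merged;
  }
}
```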

[druid] branch master updated (0891b1f -> 9a81740)

2020-08-18 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 0891b1f  Add note about aggregations on floats (#10285)
 add 9a81740  Don't log the entire task spec (#10278)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/druid/common/utils/IdUtils.java |  47 ++
 .../org/apache/druid/common/utils/IdUtilsTest.java |  49 ++
 .../druid/indexing/common/task/AbstractTask.java   |  39 +---
 .../druid/indexing/common/task/CompactionTask.java |   4 +-
 .../batch/parallel/ParallelIndexPhaseRunner.java   |  28 +-
 .../parallel/ParallelIndexSupervisorTask.java  |  16 +---
 .../common/task/batch/parallel/TaskMonitor.java|   4 +-
 .../task/ClientCompactionTaskQuerySerdeTest.java   | 100 -
 ...ClientKillUnusedSegmentsTaskQuerySerdeTest.java |  80 +
 .../druid/indexing/common/task/TaskSerdeTest.java  |  37 
 .../AbstractParallelIndexSupervisorTaskTest.java   |   2 +-
 .../ParallelIndexSupervisorTaskKillTest.java   |   2 +-
 .../ParallelIndexSupervisorTaskResourceTest.java   |   2 +-
 .../task/batch/parallel/TaskMonitorTest.java   |   2 +-
 .../client/indexing/ClientCompactionTaskQuery.java |  25 --
 .../ClientKillUnusedSegmentsTaskQuery.java |  38 +++-
 .../druid/client/indexing/ClientTaskQuery.java |   6 +-
 .../client/indexing/HttpIndexingServiceClient.java |  39 +---
 .../client/indexing/IndexingServiceClient.java |   5 +-
 .../server/coordinator/duty/CompactSegments.java   |   1 +
 .../coordinator/duty/KillUnusedSegments.java   |   2 +-
 .../druid/server/http/DataSourcesResource.java |   2 +-
 ... => ClientKillUnusedSegmentsTaskQueryTest.java} |  14 ++-
 .../client/indexing/NoopIndexingServiceClient.java |   5 +-
 .../coordinator/duty/CompactSegmentsTest.java  |   9 +-
 .../druid/server/http/DataSourcesResourceTest.java |   2 +-
 26 files changed, 401 insertions(+), 159 deletions(-)
 create mode 100644 
indexing-service/src/test/java/org/apache/druid/indexing/common/task/ClientKillUnusedSegmentsTaskQuerySerdeTest.java
 rename 
server/src/test/java/org/apache/druid/client/indexing/{ClientKillUnusedSegmentsQueryTest.java
 => ClientKillUnusedSegmentsTaskQueryTest.java} (82%)


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated (21703d8 -> f82fd22)

2020-08-26 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 21703d8  Fix handling of 'join' on top of 'union' datasources. (#10318)
 add f82fd22  Move tools for indexing to TaskToolbox instead of injecting them in constructor (#10308)

No new revisions were added by this update.

Summary of changes:
 .../IncrementalPublishingKafkaIndexTaskRunner.java |  10 --
 .../druid/indexing/kafka/KafkaIndexTask.java   |  19 +--
 .../indexing/kafka/supervisor/KafkaSupervisor.java |   6 +-
 .../druid/indexing/kafka/KafkaIndexTaskTest.java   |  17 ++-
 .../kafka/supervisor/KafkaSupervisorTest.java  |   7 +-
 .../druid/indexing/kinesis/KinesisIndexTask.java   |  19 +--
 .../indexing/kinesis/KinesisIndexTaskRunner.java   |  10 --
 .../kinesis/supervisor/KinesisSupervisor.java  |   6 +-
 .../kinesis/KinesisIndexTaskSerdeTest.java |   4 -
 .../indexing/kinesis/KinesisIndexTaskTest.java |  31 +++--
 .../kinesis/supervisor/KinesisSupervisorTest.java  |   7 +-
 .../apache/druid/indexing/common/TaskToolbox.java  |  78 +++-
 .../druid/indexing/common/TaskToolboxFactory.java  |  47 ++-
 .../task/AppenderatorDriverRealtimeIndexTask.java  |  48 ++-
 .../druid/indexing/common/task/CompactionTask.java |  84 +
 .../indexing/common/task/HadoopIndexTask.java  |   5 +-
 .../druid/indexing/common/task/IndexTask.java  |  88 +
 .../GeneratedPartitionsMetadataReport.java |   2 +-
 .../InputSourceSplitParallelIndexTaskRunner.java   |   7 +-
 .../batch/parallel/LegacySinglePhaseSubTask.java   |  14 +--
 .../batch/parallel/ParallelIndexPhaseRunner.java   |  14 +--
 .../parallel/ParallelIndexSupervisorTask.java  |  59 +++--
 ...mensionDistributionParallelIndexTaskRunner.java |  39 +-
 .../parallel/PartialDimensionDistributionTask.java |  23 +---
 ...GenericSegmentMergeParallelIndexTaskRunner.java |  11 +-
 .../parallel/PartialGenericSegmentMergeTask.java   |  13 +-
 ...HashSegmentGenerateParallelIndexTaskRunner.java |  11 +-
 .../parallel/PartialHashSegmentGenerateTask.java   |  12 +-
 ...angeSegmentGenerateParallelIndexTaskRunner.java |   9 +-
 .../parallel/PartialRangeSegmentGenerateTask.java  |  12 +-
 .../batch/parallel/PartialSegmentGenerateTask.java |  21 +---
 .../batch/parallel/PartialSegmentMergeTask.java|  19 +--
 .../SinglePhaseParallelIndexTaskRunner.java|   7 +-
 .../task/batch/parallel/SinglePhaseSubTask.java|  21 +---
 .../batch/parallel/SinglePhaseSubTaskSpec.java |  11 +-
 .../seekablestream/SeekableStreamIndexTask.java|  24 +---
 .../SeekableStreamIndexTaskRunner.java |  40 +++---
 .../druid/indexing/common/TaskToolboxTest.java |  13 ++
 .../AppenderatorDriverRealtimeIndexTaskTest.java   |  35 ++
 .../task/ClientCompactionTaskQuerySerdeTest.java   |   9 +-
 .../common/task/CompactionTaskParallelRunTest.java |  60 +
 .../common/task/CompactionTaskRunTest.java | 104 +++
 .../indexing/common/task/CompactionTaskTest.java   |  86 -
 .../druid/indexing/common/task/IndexTaskTest.java  | 139 -
 .../indexing/common/task/IngestionTestBase.java|  11 ++
 .../common/task/RealtimeIndexTaskTest.java |  22 ++--
 .../druid/indexing/common/task/TaskSerdeTest.java  |   8 --
 .../AbstractMultiPhaseParallelIndexingTest.java|   7 +-
 .../AbstractParallelIndexSupervisorTaskTest.java   |  82 +++-
 .../parallel/ParallelIndexPhaseRunnerTest.java |   7 +-
 .../ParallelIndexSupervisorTaskKillTest.java   |  47 +++
 .../ParallelIndexSupervisorTaskResourceTest.java   |  36 ++
 .../ParallelIndexSupervisorTaskSerdeTest.java  |  21 +---
 .../parallel/ParallelIndexSupervisorTaskTest.java  |   5 -
 .../task/batch/parallel/PartialCompactionTest.java |  13 +-
 .../PartialDimensionDistributionTaskTest.java  |  49 
 .../PartialGenericSegmentMergeTaskTest.java|   5 +-
 .../PartialHashSegmentGenerateTaskTest.java|   5 +-
 .../PartialRangeSegmentGenerateTaskTest.java   |   9 +-
 .../parallel/SinglePhaseParallelIndexingTest.java  |  13 +-
 .../overlord/SingleTaskBackgroundRunnerTest.java   |  12 ++
 .../druid/indexing/overlord/TaskLifecycleTest.java |  66 +-
 .../SeekableStreamSupervisorStateTest.java |  14 +--
 .../indexing/worker/WorkerTaskManagerTest.java |  28 ++---
 .../indexing/worker/WorkerTaskMonitorTest.java |  15 ++-
 .../org/apache/druid/cli/CliMiddleManager.java |   3 +-
 .../java/org/apache/druid/cli/CliOverlord.java |   3 +-
 67 files changed, 549 insertions(+), 1233 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated (e5f0da3 -> a5c46dc)

2020-09-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from e5f0da3  Fix stringFirst/stringLast rollup during ingestion (#10332)
 add a5c46dc  Add vectorization for druid-histogram extension (#10304)

No new revisions were added by this update.

Summary of changes:
 docs/querying/query-context.md |   3 +-
 .../ApproximateHistogramAggregatorFactory.java |  22 +++
 .../ApproximateHistogramBufferAggregator.java  |  34 +---
 ...ApproximateHistogramBufferAggregatorHelper.java |  70 +++
 ...proximateHistogramFoldingAggregatorFactory.java |  31 ++-
 ...pproximateHistogramFoldingBufferAggregator.java |  40 +---
 ...ateHistogramFoldingBufferAggregatorHelper.java} |  71 +++
 ...pproximateHistogramFoldingVectorAggregator.java |  90 +
 .../ApproximateHistogramVectorAggregator.java  |  51 ++---
 .../histogram/FixedBucketsHistogram.java   |  30 +++
 .../histogram/FixedBucketsHistogramAggregator.java |  19 +-
 .../FixedBucketsHistogramAggregatorFactory.java|  33 
 .../FixedBucketsHistogramBufferAggregator.java |  37 +---
 ...ixedBucketsHistogramBufferAggregatorHelper.java |  88 +
 .../FixedBucketsHistogramVectorAggregator.java |  99 ++
 ...ximateHistogramFoldingVectorAggregatorTest.java | 143 ++
 .../ApproximateHistogramVectorAggregatorTest.java  | 152 +++
 .../histogram/FixedBucketsHistogramTest.java   |  88 +
 .../FixedBucketsHistogramVectorAggregatorTest.java | 209 +
 .../histogram/sql/QuantileSqlAggregatorTest.java   |   2 +
 website/.spelling  |   1 +
 21 files changed, 1131 insertions(+), 182 deletions(-)
 create mode 100644 
extensions-core/histogram/src/main/java/org/apache/druid/query/aggregation/histogram/ApproximateHistogramBufferAggregatorHelper.java
 copy 
extensions-core/histogram/src/main/java/org/apache/druid/query/aggregation/histogram/{ApproximateHistogramFoldingBufferAggregator.java
 => ApproximateHistogramFoldingBufferAggregatorHelper.java} (55%)
 create mode 100644 
extensions-core/histogram/src/main/java/org/apache/druid/query/aggregation/histogram/ApproximateHistogramFoldingVectorAggregator.java
 copy 
processing/src/main/java/org/apache/druid/query/aggregation/FloatMinVectorAggregator.java
 => 
extensions-core/histogram/src/main/java/org/apache/druid/query/aggregation/histogram/ApproximateHistogramVectorAggregator.java
 (55%)
 create mode 100644 
extensions-core/histogram/src/main/java/org/apache/druid/query/aggregation/histogram/FixedBucketsHistogramBufferAggregatorHelper.java
 create mode 100644 
extensions-core/histogram/src/main/java/org/apache/druid/query/aggregation/histogram/FixedBucketsHistogramVectorAggregator.java
 create mode 100644 
extensions-core/histogram/src/test/java/org/apache/druid/query/aggregation/histogram/ApproximateHistogramFoldingVectorAggregatorTest.java
 create mode 100644 
extensions-core/histogram/src/test/java/org/apache/druid/query/aggregation/histogram/ApproximateHistogramVectorAggregatorTest.java
 create mode 100644 
extensions-core/histogram/src/test/java/org/apache/druid/query/aggregation/histogram/FixedBucketsHistogramVectorAggregatorTest.java
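The diffstat shows shared "...BufferAggregatorHelper" classes being extracted so the scalar and vectorized aggregators can reuse one update routine. A toy version of that pattern with a simple fixed-bucket histogram (names and bucketing formula are illustrative, not Druid's API): the helper owns the per-value update, the scalar path applies it once per row, and the vectorized path applies it across a batch of offsets in a single call.

```java
// Illustrative sketch only; not Druid's VectorAggregator interface.
public class HistogramAggSketch
{
  /** Core update logic, shared by the scalar and vectorized paths. */
  static void update(long[] buckets, double bucketWidth, double value)
  {
    final int bucket = (int) (value / bucketWidth);
    if (bucket >= 0 && bucket < buckets.length) {
      buckets[bucket]++;
    }
  }

  /** Scalar path: the engine calls this once per row. */
  public static void aggregate(long[] buckets, double bucketWidth, double value)
  {
    update(buckets, bucketWidth, value);
  }

  /** Vectorized path: the engine calls this once per batch of rows [start, end). */
  public static void aggregateVector(long[] buckets, double bucketWidth, double[] values, int start, int end)
  {
    for (int i = start; i < end; i++) {
      update(buckets, bucketWidth, values[i]);
    }
  }
}
```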


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated (89160c2 -> cb30b1f)

2020-09-24 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 89160c2  better query view initial state (#10431)
 add cb30b1f  Automatically determine numShards for parallel ingestion hash partitioning (#10419)

No new revisions were added by this update.

Summary of changes:
 .../indexer/partitions/HashedPartitionsSpec.java   |   4 +-
 .../partition/HashBasedNumberedShardSpec.java  |   5 +-
 docs/ingestion/native-batch.md |  11 +-
 .../druid/indexing/common/task/IndexTask.java  |   6 +-
 .../apache/druid/indexing/common/task/Task.java|   2 +
 .../batch/parallel/DimensionCardinalityReport.java | 109 +
 .../parallel/ParallelIndexSupervisorTask.java  | 126 ++-
 ...mensionCardinalityParallelIndexTaskRunner.java} |  21 +-
 .../parallel/PartialDimensionCardinalityTask.java  | 245 
 ...HashSegmentGenerateParallelIndexTaskRunner.java |   9 +-
 .../parallel/PartialHashSegmentGenerateTask.java   |  23 +-
 .../common/task/batch/parallel/SubTaskReport.java  |   1 +
 .../AbstractParallelIndexSupervisorTaskTest.java   |   3 +-
 .../parallel/DimensionCardinalityReportTest.java   | 142 
 ...ashPartitionMultiPhaseParallelIndexingTest.java |  36 ++-
 .../ParallelIndexSupervisorTaskSerdeTest.java  |  15 +-
 ...va => PartialDimensionCardinalityTaskTest.java} | 246 +++--
 .../PartialHashSegmentGenerateTaskTest.java|   6 +-
 .../parallel/PerfectRollupWorkerTaskTest.java  |   5 +-
 .../tests/indexer/AbstractITBatchIndexTest.java|   2 +
 20 files changed, 791 insertions(+), 226 deletions(-)
 create mode 100644 
indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/DimensionCardinalityReport.java
 copy 
indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/{PartialDimensionDistributionParallelIndexTaskRunner.java
 => PartialDimensionCardinalityParallelIndexTaskRunner.java} (75%)
 create mode 100644 
indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/PartialDimensionCardinalityTask.java
 create mode 100644 
indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/DimensionCardinalityReportTest.java
 copy 
indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/{PartialDimensionDistributionTaskTest.java
 => PartialDimensionCardinalityTaskTest.java} (53%)
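Per the commit title and the new DimensionCardinalityReport above, subtasks report an estimated dimension cardinality and the supervisor derives numShards from the merged estimate, instead of requiring numShards up front. A toy version of the final derivation, assuming a simple ceiling divide against a target rows-per-segment setting (the formula here is illustrative, not necessarily Druid's exact computation):

```java
// Illustrative sketch only; not Druid's ParallelIndexSupervisorTask code.
public class NumShardsSketch
{
  /**
   * Derives a shard count from an estimated cardinality and a target number
   * of rows per segment: ceiling division, with a floor of one shard.
   */
  public static int determineNumShards(long estimatedCardinality, long targetRowsPerSegment)
  {
    final long shards = (estimatedCardinality + targetRowsPerSegment - 1) / targetRowsPerSegment;
    return (int) Math.max(1, shards);
  }
}
```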


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated (89160c2 -> cb30b1f)

2020-09-24 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 89160c2  better query view initial state (#10431)
 add cb30b1f  Automatically determine numShards for parallel ingestion hash 
partitioning (#10419)

No new revisions were added by this update.

Summary of changes:
 .../indexer/partitions/HashedPartitionsSpec.java   |   4 +-
 .../partition/HashBasedNumberedShardSpec.java  |   5 +-
 docs/ingestion/native-batch.md |  11 +-
 .../druid/indexing/common/task/IndexTask.java  |   6 +-
 .../apache/druid/indexing/common/task/Task.java|   2 +
 .../batch/parallel/DimensionCardinalityReport.java | 109 +
 .../parallel/ParallelIndexSupervisorTask.java  | 126 ++-
 ...mensionCardinalityParallelIndexTaskRunner.java} |  21 +-
 .../parallel/PartialDimensionCardinalityTask.java  | 245 
 ...HashSegmentGenerateParallelIndexTaskRunner.java |   9 +-
 .../parallel/PartialHashSegmentGenerateTask.java   |  23 +-
 .../common/task/batch/parallel/SubTaskReport.java  |   1 +
 .../AbstractParallelIndexSupervisorTaskTest.java   |   3 +-
 .../parallel/DimensionCardinalityReportTest.java   | 142 
 ...ashPartitionMultiPhaseParallelIndexingTest.java |  36 ++-
 .../ParallelIndexSupervisorTaskSerdeTest.java  |  15 +-
 ...va => PartialDimensionCardinalityTaskTest.java} | 246 +++--
 .../PartialHashSegmentGenerateTaskTest.java|   6 +-
 .../parallel/PerfectRollupWorkerTaskTest.java  |   5 +-
 .../tests/indexer/AbstractITBatchIndexTest.java|   2 +
 20 files changed, 791 insertions(+), 226 deletions(-)
 create mode 100644 
indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/DimensionCardinalityReport.java
 copy 
indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/{PartialDimensionDistributionParallelIndexTaskRunner.java
 => PartialDimensionCardinalityParallelIndexTaskRunner.java} (75%)
 create mode 100644 
indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/PartialDimensionCardinalityTask.java
 create mode 100644 
indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/DimensionCardinalityReportTest.java
 copy 
indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/{PartialDimensionDistributionTaskTest.java
 => PartialDimensionCardinalityTaskTest.java} (53%)
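Per the commit title and the new PartialDimensionCardinalityTask/DimensionCardinalityReport files above, parallel ingestion can now derive a shard count instead of requiring one up front. As a hedged illustration (the surrounding spec fields are placeholders, not taken from this commit), a `hashed` partitionsSpec that simply omits `numShards` lets the supervisor task determine it from the estimated dimension cardinality:

```json
"partitionsSpec": {
  "type": "hashed",
  "partitionDimensions": ["dimA"]
}
```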


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated (cb30b1f -> 0cc9eb4)

2020-09-24 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from cb30b1f  Automatically determine numShards for parallel ingestion hash partitioning (#10419)
 add 0cc9eb4  Store hash partition function in dataSegment and allow segment pruning only when hash partition function is provided (#10288)

No new revisions were added by this update.

Summary of changes:
 .../indexer/partitions/HashedPartitionsSpec.java   |  31 ++-
 .../BuildingHashBasedNumberedShardSpec.java|  17 +-
 .../timeline/partition/BuildingShardSpec.java  |   7 -
 .../HashBasedNumberedPartialShardSpec.java |  19 +-
 .../partition/HashBasedNumberedShardSpec.java  | 245 ++---
 .../timeline/partition/HashBucketShardSpec.java|  40 +++-
 .../timeline/partition/HashPartitionFunction.java  |  62 ++
 .../druid/timeline/partition/HashPartitioner.java  | 101 +
 .../druid/timeline/partition/LinearShardSpec.java  |   6 -
 .../druid/timeline/partition/NoneShardSpec.java|   6 -
 .../partition/NumberedOverwriteShardSpec.java  |   6 -
 .../timeline/partition/NumberedShardSpec.java  |   6 -
 .../timeline/partition/RangeBucketShardSpec.java   |  13 +-
 .../apache/druid/timeline/partition/ShardSpec.java |   4 -
 .../partition/SingleDimensionShardSpec.java|   7 +-
 .../org/apache/druid/timeline/DataSegmentTest.java |   7 -
 .../BuildingHashBasedNumberedShardSpecTest.java|  22 +-
 .../HashBasedNumberedPartialShardSpecTest.java |  14 +-
 .../partition/HashBasedNumberedShardSpecTest.java  | 245 ++---
 .../partition/HashBucketShardSpecTest.java |  35 ++-
 .../partition/NumberedOverwriteShardSpecTest.java  |   2 +-
 .../timeline/partition/NumberedShardSpecTest.java  |   2 +-
 .../partition/PartitionHolderCompletenessTest.java |   6 +-
 .../partition/SingleDimensionShardSpecTest.java|   4 +-
 docs/ingestion/hadoop.md   |  11 +
 docs/ingestion/index.md|   2 +-
 docs/ingestion/native-batch.md |  23 +-
 docs/querying/query-context.md |   1 +
 .../MaterializedViewSupervisorTest.java|  16 +-
 indexing-hadoop/pom.xml|   5 +
 .../indexer/DetermineHashedPartitionsJob.java  |  13 ++
 .../HadoopDruidDetermineConfigurationJob.java  |   5 +
 .../druid/indexer/BatchDeltaIngestionTest.java |  11 +-
 .../indexer/DetermineHashedPartitionsJobTest.java  |  39 +++-
 .../HadoopDruidDetermineConfigurationJobTest.java  | 127 +++
 .../indexer/HadoopDruidIndexerConfigTest.java  |  19 +-
 .../druid/indexer/IndexGeneratorJobTest.java   |  20 +-
 .../partitions/HashedPartitionsSpecTest.java   |  11 +
 .../parallel/PartialDimensionCardinalityTask.java  |   9 +-
 .../batch/partition/HashPartitionAnalysis.java |   1 +
 .../common/actions/SegmentAllocateActionTest.java  |  10 +-
 .../druid/indexing/common/task/IndexTaskTest.java  |  78 ++-
 .../druid/indexing/common/task/ShardSpecsTest.java |   5 +-
 .../batch/parallel/GenericPartitionStatTest.java   |   2 +
 ...ashPartitionMultiPhaseParallelIndexingTest.java |  31 ++-
 .../parallel/ParallelIndexSupervisorTaskTest.java  |  10 +-
 .../parallel/ParallelIndexTestingFactory.java  |   2 +
 .../parallel/PerfectRollupWorkerTaskTest.java  |   1 +
 .../druid/indexing/overlord/TaskLockboxTest.java   |   4 +-
 .../druid/tests/hadoop/ITHadoopIndexTest.java  |   2 +
 .../indexer/ITPerfectRollupParallelIndexTest.java  |   4 +-
 .../java/org/apache/druid/query/QueryContexts.java |   6 +
 .../org/apache/druid/query/QueryContextsTest.java  |  24 ++
 .../druid/client/CachingClusteredClient.java   |  18 +-
 .../druid/client/CachingClusteredClientTest.java   | 238 ++--
 .../IndexerSQLMetadataStorageCoordinatorTest.java  |   4 +-
 .../appenderator/SegmentPublisherHelperTest.java   |  40 +++-
 .../coordinator/duty/CompactSegmentsTest.java  |   1 +
 website/.spelling  |   2 +
 59 files changed, 1311 insertions(+), 391 deletions(-)
 create mode 100644 
core/src/main/java/org/apache/druid/timeline/partition/HashPartitionFunction.java
 create mode 100644 
core/src/main/java/org/apache/druid/timeline/partition/HashPartitioner.java
 create mode 100644 
indexing-hadoop/src/test/java/org/apache/druid/indexer/HadoopDruidDetermineConfigurationJobTest.java
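The guard described in the subject — segment pruning is allowed only when the hash partition function is recorded — can be modeled in miniature. This is a hypothetical sketch, not the Druid `CachingClusteredClient` code; the field names and the `murmur3_32_abs` string are illustrative:

```python
def candidate_segments(segments, bucket_of):
    """Toy model of the pruning rule in #10288: a segment may be skipped by
    hash bucket only if it records which hash partition function produced
    its bucket; segments without one must always be scanned."""
    kept = []
    for seg in segments:
        fn = seg.get("hashPartitionFunction")
        if fn is None:
            kept.append(seg)                 # function unknown: cannot prune safely
        elif seg["bucketId"] == bucket_of(fn):
            kept.append(seg)                 # function known and bucket matches
    return kept

segments = [
    {"id": "legacy", "hashPartitionFunction": None, "bucketId": 0},
    {"id": "hit",    "hashPartitionFunction": "murmur3_32_abs", "bucketId": 1},
    {"id": "miss",   "hashPartitionFunction": "murmur3_32_abs", "bucketId": 2},
]
# Query key hashes to bucket 1: the legacy segment survives (no function
# recorded), the matching bucket survives, the non-matching bucket is pruned.
assert [s["id"] for s in candidate_segments(segments, lambda fn: 1)] == ["legacy", "hit"]
```

This mirrors why the change is backward compatible: older segments written without a recorded function are never pruned, only scanned.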


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated: add vectorizeVirtualColumns query context parameter (#10432)

2020-09-28 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 1d6cb62  add vectorizeVirtualColumns query context parameter (#10432)
1d6cb62 is described below

commit 1d6cb624f4a455f45f41ef4b773cf21859a09ef4
Author: Clint Wylie 
AuthorDate: Mon Sep 28 18:48:34 2020 -0700

add vectorizeVirtualColumns query context parameter (#10432)

* add vectorizeVirtualColumns query context parameter

* oops

* spelling

* default to false, more docs

* fix test

* fix spelling
---
 .../benchmark/FilteredAggregatorBenchmark.java |   8 +-
 .../apache/druid/benchmark/query/SqlBenchmark.java |  11 +-
 .../benchmark/query/SqlExpressionBenchmark.java|   6 +-
 docs/misc/math-expr.md |  15 +-
 docs/querying/query-context.md |   3 +-
 .../java/org/apache/druid/query/QueryContexts.java |  12 +
 .../druid/query/groupby/GroupByQueryConfig.java|   4 +-
 .../epinephelinae/vector/VectorGroupByEngine.java  |   2 +
 .../query/timeseries/TimeseriesQueryEngine.java|   6 +-
 .../segment/QueryableIndexStorageAdapter.java  |   2 +-
 .../org/apache/druid/segment/VirtualColumns.java   |  12 +-
 .../mean/DoubleMeanAggregationTest.java|   2 +-
 .../query/groupby/GroupByQueryRunnerTest.java  |   3 +-
 .../timeseries/TimeseriesQueryRunnerTest.java  |   6 +-
 .../virtual/AlwaysTwoCounterAggregatorFactory.java |   4 +-
 .../virtual/AlwaysTwoVectorizedVirtualColumn.java  |  18 +-
 .../virtual/VectorizedVirtualColumnTest.java   | 302 -
 .../druid/sql/calcite/BaseCalciteQueryTest.java|   5 +-
 .../apache/druid/sql/calcite/CalciteQueryTest.java |  12 +-
 .../calcite/SqlVectorizedExpressionSanityTest.java |  11 +-
 website/.spelling  |   1 +
 21 files changed, 410 insertions(+), 35 deletions(-)

diff --git 
a/benchmarks/src/test/java/org/apache/druid/benchmark/FilteredAggregatorBenchmark.java
 
b/benchmarks/src/test/java/org/apache/druid/benchmark/FilteredAggregatorBenchmark.java
index 47b5317..560148b 100644
--- 
a/benchmarks/src/test/java/org/apache/druid/benchmark/FilteredAggregatorBenchmark.java
+++ 
b/benchmarks/src/test/java/org/apache/druid/benchmark/FilteredAggregatorBenchmark.java
@@ -32,6 +32,7 @@ import org.apache.druid.java.util.common.logger.Logger;
 import org.apache.druid.query.Druids;
 import org.apache.druid.query.FinalizeResultsQueryRunner;
 import org.apache.druid.query.Query;
+import org.apache.druid.query.QueryContexts;
 import org.apache.druid.query.QueryPlus;
 import org.apache.druid.query.QueryRunner;
 import org.apache.druid.query.QueryRunnerFactory;
@@ -239,7 +240,12 @@ public class FilteredAggregatorBenchmark
 );
 
 final QueryPlus queryToRun = QueryPlus.wrap(
-query.withOverriddenContext(ImmutableMap.of("vectorize", vectorize))
+query.withOverriddenContext(
+ImmutableMap.of(
+QueryContexts.VECTORIZE_KEY, vectorize,
+QueryContexts.VECTORIZE_VIRTUAL_COLUMNS_KEY, vectorize
+)
+)
 );
 Sequence queryResult = theRunner.run(queryToRun, 
ResponseContext.createEmpty());
 return queryResult.toList();
diff --git 
a/benchmarks/src/test/java/org/apache/druid/benchmark/query/SqlBenchmark.java 
b/benchmarks/src/test/java/org/apache/druid/benchmark/query/SqlBenchmark.java
index 55dc74c..38b5c3a 100644
--- 
a/benchmarks/src/test/java/org/apache/druid/benchmark/query/SqlBenchmark.java
+++ 
b/benchmarks/src/test/java/org/apache/druid/benchmark/query/SqlBenchmark.java
@@ -27,6 +27,7 @@ import 
org.apache.druid.java.util.common.granularity.Granularities;
 import org.apache.druid.java.util.common.guava.Sequence;
 import org.apache.druid.java.util.common.io.Closer;
 import org.apache.druid.java.util.common.logger.Logger;
+import org.apache.druid.query.QueryContexts;
 import org.apache.druid.query.QueryRunnerFactoryConglomerate;
 import org.apache.druid.segment.QueryableIndex;
 import org.apache.druid.segment.generator.GeneratorBasicSchemas;
@@ -434,7 +435,10 @@ public class SqlBenchmark
   @OutputTimeUnit(TimeUnit.MILLISECONDS)
   public void querySql(Blackhole blackhole) throws Exception
   {
-final Map context = ImmutableMap.of("vectorize", 
vectorize);
+final Map context = ImmutableMap.of(
+QueryContexts.VECTORIZE_KEY, vectorize,
+QueryContexts.VECTORIZE_VIRTUAL_COLUMNS_KEY, vectorize
+);
 final AuthenticationResult authenticationResult = 
NoopEscalator.getInstance()

.createEscalatedAuthenticationResult();
 try (final DruidPlanner planner = plannerFactory.createPlanner(context, 
ImmutableList.of(), authenticationResult)) {
@@ -450,7 +45
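The diffs above pass the new key through `QueryContexts.VECTORIZE_VIRTUAL_COLUMNS_KEY`; the commit title gives its string form, `vectorizeVirtualColumns`. A hedged sketch of setting it in a native query's context (the datasource, interval, and aggregator are placeholders, and the value set assumes the usual `vectorize` values):

```json
{
  "queryType": "timeseries",
  "dataSource": "example_datasource",
  "intervals": ["2020-01-01/2020-02-01"],
  "granularity": "all",
  "aggregations": [{ "type": "count", "name": "rows" }],
  "context": {
    "vectorize": "force",
    "vectorizeVirtualColumns": "true"
  }
}
```

Per the commit message ("default to false, more docs"), virtual-column vectorization stays off unless this key is set.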

[druid] branch master updated: Adding task slot count metrics to Druid Overlord (#10379)

2020-09-28 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 8168e14  Adding task slot count metrics to Druid Overlord (#10379)
8168e14 is described below

commit 8168e14e9224c9459efda07b038269815975cf50
Author: Mainak Ghosh 
AuthorDate: Mon Sep 28 23:50:38 2020 -0700

Adding task slot count metrics to Druid Overlord (#10379)

* Adding more worker metrics to Druid Overlord

* Changing the nomenclature from worker to peon as that represents the metrics that we want to monitor better

* Few more instance of worker usage replaced with peon

* Modifying the peon idle count logic to only use eligible workers available capacity

* Changing the naming to task slot count instead of peon

* Adding some unit test coverage for the new test runner apis

* Addressing Review Comments

* Modifying the TaskSlotCountStatsProvider apis so that overlords which are not leader do not emit these metrics

* Fixing the spelling issue in the docs

* Setting the annotation Nullable on the TaskSlotCountStatsProvider methods
---
 docs/operations/metrics.md |  5 ++
 .../main/resources/defaultMetricDimensions.json|  6 ++
 .../druid/indexing/overlord/ForkingTaskRunner.java | 33 +
 .../apache/druid/indexing/overlord/PortFinder.java |  5 ++
 .../druid/indexing/overlord/RemoteTaskRunner.java  | 80 +---
 .../overlord/SingleTaskBackgroundRunner.java   | 30 
 .../apache/druid/indexing/overlord/TaskMaster.java | 64 +++-
 .../apache/druid/indexing/overlord/TaskRunner.java | 13 
 .../indexing/overlord/ThreadingTaskRunner.java | 32 
 .../overlord/hrtr/HttpRemoteTaskRunner.java| 61 +++
 .../indexing/common/task/IngestionTestBase.java| 30 
 .../indexing/overlord/RemoteTaskRunnerTest.java| 25 ++-
 .../druid/indexing/overlord/TestTaskRunner.java| 30 
 .../overlord/hrtr/HttpRemoteTaskRunnerTest.java| 30 
 .../druid/indexing/overlord/http/OverlordTest.java | 30 
 .../server/metrics/TaskSlotCountStatsMonitor.java  | 57 ++
 .../server/metrics/TaskSlotCountStatsProvider.java | 55 ++
 .../metrics/TaskSlotCountStatsMonitorTest.java | 86 ++
 .../java/org/apache/druid/cli/CliOverlord.java |  2 +
 website/.spelling  |  1 +
 20 files changed, 662 insertions(+), 13 deletions(-)

diff --git a/docs/operations/metrics.md b/docs/operations/metrics.md
index 62b6f57..1b4ed7f 100644
--- a/docs/operations/metrics.md
+++ b/docs/operations/metrics.md
@@ -196,6 +196,11 @@ Note: If the JVM does not support CPU time measurement for 
the current thread, i
|`task/running/count`|Number of current running tasks. This metric is only available if the TaskCountStatsMonitor module is included.|dataSource.|Varies.|
|`task/pending/count`|Number of current pending tasks. This metric is only available if the TaskCountStatsMonitor module is included.|dataSource.|Varies.|
|`task/waiting/count`|Number of current waiting tasks. This metric is only available if the TaskCountStatsMonitor module is included.|dataSource.|Varies.|
+|`taskSlot/total/count`|Number of total task slots per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.| |Varies.|
+|`taskSlot/idle/count`|Number of idle task slots per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.| |Varies.|
+|`taskSlot/used/count`|Number of busy task slots per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.| |Varies.|
+|`taskSlot/lazy/count`|Number of total task slots in lazy marked MiddleManagers and Indexers per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.| |Varies.|
+|`taskSlot/blacklisted/count`|Number of total task slots in blacklisted MiddleManagers and Indexers per emission period. This metric is only available if the TaskSlotCountStatsMonitor module is included.| |Varies.|
 
 ## Coordination
 
diff --git 
a/extensions-contrib/statsd-emitter/src/main/resources/defaultMetricDimensions.json
 
b/extensions-contrib/statsd-emitter/src/main/resources/defaultMetricDimensions.json
index 859a9c6..1a62d70 100644
--- 
a/extensions-contrib/statsd-emitter/src/main/resources/defaultMetricDimensions.json
+++ 
b/extensions-contrib/statsd-emitter/src/main/resources/defaultMetricDimensions.json
@@ -62,6 +62,12 @@
   "task/pending/count" : { "dimensions" : ["dataSource"], "type" : "count" },
   "task/waiting/count" : { "dimensions" : ["da
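The metrics documented above are emitted only when the new monitor is enabled. A hedged sketch of the Overlord runtime properties — the monitor class name comes from the `server/metrics/TaskSlotCountStatsMonitor.java` file added in this commit, while the property syntax assumes Druid's standard `druid.monitoring.monitors` list:

```properties
# Enable the new task-slot monitor on the Overlord (illustrative config)
druid.monitoring.monitors=["org.apache.druid.server.metrics.TaskSlotCountStatsMonitor"]
```

Note the commit bullet above: non-leader Overlords deliberately do not emit these metrics even with the monitor enabled.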

[druid] branch 0.20.0 created (now 8168e14)

2020-09-29 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git.


  at 8168e14  Adding task slot count metrics to Druid Overlord (#10379)

No new revisions were added by this update.


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch master updated: vectorize constant expressions with optimized selectors (#10440)

2020-09-29 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 753bce3  vectorize constant expressions with optimized selectors (#10440)
753bce3 is described below

commit 753bce324bdf8c7c5b2b602f89c720749bfa6e22
Author: Clint Wylie 
AuthorDate: Tue Sep 29 13:19:06 2020 -0700

vectorize constant expressions with optimized selectors (#10440)
---
 .../segment/vector/ConstantVectorSelectors.java| 172 +
 .../druid/segment/virtual/ExpressionPlan.java  |   5 +
 .../segment/virtual/ExpressionVectorSelectors.java |  34 
 .../segment/virtual/ExpressionVirtualColumn.java   |  24 ++-
 .../virtual/ExpressionVectorSelectorsTest.java |  97 +++-
 .../calcite/SqlVectorizedExpressionSanityTest.java |   1 +
 6 files changed, 297 insertions(+), 36 deletions(-)

diff --git 
a/processing/src/main/java/org/apache/druid/segment/vector/ConstantVectorSelectors.java
 
b/processing/src/main/java/org/apache/druid/segment/vector/ConstantVectorSelectors.java
new file mode 100644
index 000..c1e3c3b
--- /dev/null
+++ 
b/processing/src/main/java/org/apache/druid/segment/vector/ConstantVectorSelectors.java
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment.vector;
+
+import org.apache.druid.segment.IdLookup;
+
+import javax.annotation.Nullable;
+import java.util.Arrays;
+
+public class ConstantVectorSelectors
+{
+  public static VectorValueSelector vectorValueSelector(VectorSizeInspector 
inspector, @Nullable Number constant)
+  {
+if (constant == null) {
+  return NilVectorSelector.create(inspector);
+}
+final long[] longVector = new long[inspector.getMaxVectorSize()];
+final float[] floatVector = new float[inspector.getMaxVectorSize()];
+final double[] doubleVector = new double[inspector.getMaxVectorSize()];
+Arrays.fill(longVector, constant.longValue());
+Arrays.fill(floatVector, constant.floatValue());
+Arrays.fill(doubleVector, constant.doubleValue());
+return new VectorValueSelector()
+{
+  @Override
+  public long[] getLongVector()
+  {
+return longVector;
+  }
+
+  @Override
+  public float[] getFloatVector()
+  {
+return floatVector;
+  }
+
+  @Override
+  public double[] getDoubleVector()
+  {
+return doubleVector;
+  }
+
+  @Nullable
+  @Override
+  public boolean[] getNullVector()
+  {
+return null;
+  }
+
+  @Override
+  public int getMaxVectorSize()
+  {
+return inspector.getMaxVectorSize();
+  }
+
+  @Override
+  public int getCurrentVectorSize()
+  {
+return inspector.getCurrentVectorSize();
+  }
+};
+  }
+
+  public static VectorObjectSelector vectorObjectSelector(
+  VectorSizeInspector inspector,
+  @Nullable Object object
+  )
+  {
+if (object == null) {
+  return NilVectorSelector.create(inspector);
+}
+
+final Object[] objects = new Object[inspector.getMaxVectorSize()];
+Arrays.fill(objects, object);
+
+return new VectorObjectSelector()
+{
+  @Override
+  public Object[] getObjectVector()
+  {
+return objects;
+  }
+
+  @Override
+  public int getMaxVectorSize()
+  {
+return inspector.getMaxVectorSize();
+  }
+
+  @Override
+  public int getCurrentVectorSize()
+  {
+return inspector.getCurrentVectorSize();
+  }
+};
+  }
+
+  public static SingleValueDimensionVectorSelector 
singleValueDimensionVectorSelector(
+  VectorSizeInspector inspector,
+  @Nullable String value
+  )
+  {
+if (value == null) {
+  return NilVectorSelector.create(inspector);
+}
+
+final int[] row = new int[inspector.getMaxVectorSize()];
+return new SingleValueDimensionVectorSelector()
+{
+  @Override
+  public int[] getRowVector()
+  {
+return row;
+  }
+
+  @Override
+  public int getValueCardinality()
+ 
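The core trick in `ConstantVectorSelectors` above is that each backing array is filled exactly once and the same array is handed out on every call, so constant expressions cost no per-batch work. A hypothetical minimal model of that pattern outside Druid (not the Druid API):

```python
class ConstantVectorSelector:
    """Toy model of a constant vector selector: the backing vector is
    precomputed once at construction and reused for every batch read."""

    def __init__(self, max_vector_size, constant):
        # Fill the vector a single time; callers must treat it as read-only.
        self._vector = [float(constant)] * max_vector_size

    def get_double_vector(self):
        # Always returns the same preallocated list: no per-call allocation
        # and no per-call fill, mirroring the Arrays.fill-once approach.
        return self._vector

sel = ConstantVectorSelector(4, 2.5)
assert sel.get_double_vector() == [2.5, 2.5, 2.5, 2.5]
# Identity check: every call returns the very same object.
assert sel.get_double_vector() is sel.get_double_vector()
```

The Java version additionally precomputes long/float/double views of the same constant, since a caller may ask for any of the three.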

[druid] branch 0.20.0 updated: vectorize constant expressions with optimized selectors (#10440) (#10457)

2020-09-30 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new 51a4b1c  vectorize constant expressions with optimized selectors (#10440) (#10457)
51a4b1c is described below

commit 51a4b1cde69a8eb6fa523aba7c7e82042ba89254
Author: Clint Wylie 
AuthorDate: Wed Sep 30 16:58:01 2020 -0700

vectorize constant expressions with optimized selectors (#10440) (#10457)
---
 .../segment/vector/ConstantVectorSelectors.java| 172 +
 .../druid/segment/virtual/ExpressionPlan.java  |   5 +
 .../segment/virtual/ExpressionVectorSelectors.java |  34 
 .../segment/virtual/ExpressionVirtualColumn.java   |  24 ++-
 .../virtual/ExpressionVectorSelectorsTest.java |  97 +++-
 .../calcite/SqlVectorizedExpressionSanityTest.java |   1 +
 6 files changed, 297 insertions(+), 36 deletions(-)

diff --git 
a/processing/src/main/java/org/apache/druid/segment/vector/ConstantVectorSelectors.java
 
b/processing/src/main/java/org/apache/druid/segment/vector/ConstantVectorSelectors.java
new file mode 100644
index 000..c1e3c3b
--- /dev/null
+++ 
b/processing/src/main/java/org/apache/druid/segment/vector/ConstantVectorSelectors.java
@@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment.vector;
+
+import org.apache.druid.segment.IdLookup;
+
+import javax.annotation.Nullable;
+import java.util.Arrays;
+
+public class ConstantVectorSelectors
+{
+  public static VectorValueSelector vectorValueSelector(VectorSizeInspector 
inspector, @Nullable Number constant)
+  {
+if (constant == null) {
+  return NilVectorSelector.create(inspector);
+}
+final long[] longVector = new long[inspector.getMaxVectorSize()];
+final float[] floatVector = new float[inspector.getMaxVectorSize()];
+final double[] doubleVector = new double[inspector.getMaxVectorSize()];
+Arrays.fill(longVector, constant.longValue());
+Arrays.fill(floatVector, constant.floatValue());
+Arrays.fill(doubleVector, constant.doubleValue());
+return new VectorValueSelector()
+{
+  @Override
+  public long[] getLongVector()
+  {
+return longVector;
+  }
+
+  @Override
+  public float[] getFloatVector()
+  {
+return floatVector;
+  }
+
+  @Override
+  public double[] getDoubleVector()
+  {
+return doubleVector;
+  }
+
+  @Nullable
+  @Override
+  public boolean[] getNullVector()
+  {
+return null;
+  }
+
+  @Override
+  public int getMaxVectorSize()
+  {
+return inspector.getMaxVectorSize();
+  }
+
+  @Override
+  public int getCurrentVectorSize()
+  {
+return inspector.getCurrentVectorSize();
+  }
+};
+  }
+
+  public static VectorObjectSelector vectorObjectSelector(
+  VectorSizeInspector inspector,
+  @Nullable Object object
+  )
+  {
+if (object == null) {
+  return NilVectorSelector.create(inspector);
+}
+
+final Object[] objects = new Object[inspector.getMaxVectorSize()];
+Arrays.fill(objects, object);
+
+return new VectorObjectSelector()
+{
+  @Override
+  public Object[] getObjectVector()
+  {
+return objects;
+  }
+
+  @Override
+  public int getMaxVectorSize()
+  {
+return inspector.getMaxVectorSize();
+  }
+
+  @Override
+  public int getCurrentVectorSize()
+  {
+return inspector.getCurrentVectorSize();
+  }
+};
+  }
+
+  public static SingleValueDimensionVectorSelector 
singleValueDimensionVectorSelector(
+  VectorSizeInspector inspector,
+  @Nullable String value
+  )
+  {
+if (value == null) {
+  return NilVectorSelector.create(inspector);
+}
+
+final int[] row = new int[inspector.getMaxVectorSize()];
+return new SingleValueDimensionVectorSelector()
+{
+  @Override
+  public int[] getRowVector()
+  {
+return row;
+  }
+
+  @Override
+  public int

[druid] branch master updated: fix array types from escaping into wider query engine (#10460)

2020-10-03 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 9ec5c08  fix array types from escaping into wider query engine (#10460)
9ec5c08 is described below

commit 9ec5c08e2a3c1210fefc78e26fbafe75702c7c2f
Author: Clint Wylie 
AuthorDate: Sat Oct 3 15:30:34 2020 -0700

fix array types from escaping into wider query engine (#10460)

* fix array types from escaping into wider query engine

* oops

* adjust

* fix lgtm
---
 .../apache/druid/math/expr/BinaryOperatorExpr.java |   8 +-
 .../org/apache/druid/math/expr/ConstantExpr.java   |  11 ++
 .../main/java/org/apache/druid/math/expr/Expr.java |   6 +
 .../java/org/apache/druid/math/expr/ExprType.java  | 104 --
 .../apache/druid/math/expr/ExprTypeConversion.java | 159 +
 .../java/org/apache/druid/math/expr/Function.java  |  33 +++--
 .../org/apache/druid/math/expr/OutputTypeTest.java | 142 ++
 .../druid/segment/virtual/ExpressionPlanner.java   |  12 +-
 .../segment/virtual/ExpressionVirtualColumn.java   |  10 +-
 .../druid/sql/calcite/expression/Expressions.java  |  17 ---
 .../builtin/ReductionOperatorConversionHelper.java |   3 +-
 .../apache/druid/sql/calcite/planner/Calcites.java |   9 +-
 .../apache/druid/sql/calcite/rel/QueryMaker.java   |  16 ++-
 .../apache/druid/sql/calcite/CalciteQueryTest.java |  30 +++-
 14 files changed, 354 insertions(+), 206 deletions(-)

diff --git 
a/core/src/main/java/org/apache/druid/math/expr/BinaryOperatorExpr.java 
b/core/src/main/java/org/apache/druid/math/expr/BinaryOperatorExpr.java
index 128a780..20ecc5d 100644
--- a/core/src/main/java/org/apache/druid/math/expr/BinaryOperatorExpr.java
+++ b/core/src/main/java/org/apache/druid/math/expr/BinaryOperatorExpr.java
@@ -83,7 +83,13 @@ abstract class BinaryOpExprBase implements Expr
   @Override
   public ExprType getOutputType(InputBindingTypes inputTypes)
   {
-return ExprType.operatorAutoTypeConversion(left.getOutputType(inputTypes), right.getOutputType(inputTypes));
+if (left.isNullLiteral()) {
+  return right.getOutputType(inputTypes);
+}
+if (right.isNullLiteral()) {
+  return left.getOutputType(inputTypes);
+}
+return ExprTypeConversion.operator(left.getOutputType(inputTypes), right.getOutputType(inputTypes));
   }
 
   @Override
diff --git a/core/src/main/java/org/apache/druid/math/expr/ConstantExpr.java b/core/src/main/java/org/apache/druid/math/expr/ConstantExpr.java
index 279600d..57ae900 100644
--- a/core/src/main/java/org/apache/druid/math/expr/ConstantExpr.java
+++ b/core/src/main/java/org/apache/druid/math/expr/ConstantExpr.java
@@ -99,6 +99,11 @@ abstract class NullNumericConstantExpr extends ConstantExpr
   }
 
 
+  @Override
+  public boolean isNullLiteral()
+  {
+return true;
+  }
 }
 
 class LongExpr extends ConstantExpr
@@ -429,6 +434,12 @@ class StringExpr extends ConstantExpr
   }
 
   @Override
+  public boolean isNullLiteral()
+  {
+return value == null;
+  }
+
+  @Override
   public String toString()
   {
 return value;
diff --git a/core/src/main/java/org/apache/druid/math/expr/Expr.java b/core/src/main/java/org/apache/druid/math/expr/Expr.java
index be0a32e..ff646fe 100644
--- a/core/src/main/java/org/apache/druid/math/expr/Expr.java
+++ b/core/src/main/java/org/apache/druid/math/expr/Expr.java
@@ -53,6 +53,12 @@ public interface Expr
 return false;
   }
 
+  default boolean isNullLiteral()
+  {
+// Overridden by things that are null literals.
+return false;
+  }
+
   /**
* Returns the value of expr if expr is a literal, or throws an exception otherwise.
*
diff --git a/core/src/main/java/org/apache/druid/math/expr/ExprType.java b/core/src/main/java/org/apache/druid/math/expr/ExprType.java
index e11b8ace..ebdf64a 100644
--- a/core/src/main/java/org/apache/druid/math/expr/ExprType.java
+++ b/core/src/main/java/org/apache/druid/math/expr/ExprType.java
@@ -19,7 +19,6 @@
 
 package org.apache.druid.math.expr;
 
-import org.apache.druid.java.util.common.IAE;
 import org.apache.druid.java.util.common.ISE;
 import org.apache.druid.segment.column.ValueType;
 
@@ -169,107 +168,4 @@ public enum ExprType
 return elementType;
   }
 
-  /**
-   * Given 2 'input' types, choose the most appropriate combined type, if possible
-   *
-   * arrays must be the same type
-   * if both types are {@link #STRING}, the output type will be preserved as string
-   * if both types are {@link #LONG}, the output type will be preserved as long
-   *
-   */
-  @Nullable
-  public static ExprType operatorAutoTypeConversion(@Nullable ExprType type, @Nullable ExprType other)
-  {
-if (type == null || other == null) {
-  // cannot auto conversion unknown types
-  return null;
-}
-// arrays cannot be auto
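The null-literal special case introduced by this commit can be condensed into a standalone sketch. `SketchExprType` is a reduced stand-in for Druid's `ExprType` (array types omitted), and `conversionOperator` is an assumed approximation of `ExprTypeConversion.operator()` following the string/long-preserving rules described in the removed javadoc; it is an illustration, not the actual Druid implementation.

```java
// Reduced stand-in for org.apache.druid.math.expr.ExprType; the real enum
// also carries array types, which are omitted here.
enum SketchExprType { LONG, DOUBLE, STRING }

final class OperatorTypeSketch
{
  // Assumed approximation of ExprTypeConversion.operator(): two strings stay
  // string, two longs stay long, anything else widens to double, and an
  // unknown (null) input yields an unknown output.
  static SketchExprType conversionOperator(SketchExprType left, SketchExprType right)
  {
    if (left == null || right == null) {
      return null;
    }
    if (left == SketchExprType.STRING && right == SketchExprType.STRING) {
      return SketchExprType.STRING;
    }
    if (left == SketchExprType.LONG && right == SketchExprType.LONG) {
      return SketchExprType.LONG;
    }
    return SketchExprType.DOUBLE;
  }

  // Mirrors the new getOutputType() logic above: a null literal on either
  // side no longer forces type conversion, it simply adopts the other
  // operand's output type.
  static SketchExprType outputType(
      SketchExprType left, boolean leftIsNullLiteral,
      SketchExprType right, boolean rightIsNullLiteral
  )
  {
    if (leftIsNullLiteral) {
      return right;
    }
    if (rightIsNullLiteral) {
      return left;
    }
    return conversionOperator(left, right);
  }
}
```

This is why `NullNumericConstantExpr` and a null-valued `StringExpr` report `isNullLiteral() == true`: operators over them keep the other operand's type instead of escaping into a wider type.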

[druid] branch master updated: Update version to 0.21.0-SNAPSHOT (#10450)

2020-10-03 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 65c0d64  Update version to 0.21.0-SNAPSHOT (#10450)
65c0d64 is described below

commit 65c0d64676080ae618c76ecd08fb4dfb9decc679
Author: Jonathan Wei 
AuthorDate: Sat Oct 3 16:08:34 2020 -0700

Update version to 0.21.0-SNAPSHOT (#10450)

* [maven-release-plugin] prepare release druid-0.21.0

* [maven-release-plugin] prepare for next development iteration

* Update web-console versions
---
 benchmarks/pom.xml   |  2 +-
 cloud/aws-common/pom.xml |  2 +-
 cloud/gcp-common/pom.xml |  2 +-
 core/pom.xml |  6 ++
 distribution/pom.xml |  9 -
 extendedset/pom.xml  |  5 ++---
 extensions-contrib/aliyun-oss-extensions/pom.xml |  5 ++---
 extensions-contrib/ambari-metrics-emitter/pom.xml|  5 ++---
 extensions-contrib/cassandra-storage/pom.xml |  2 +-
 extensions-contrib/cloudfiles-extensions/pom.xml |  5 ++---
 extensions-contrib/distinctcount/pom.xml |  5 ++---
 extensions-contrib/dropwizard-emitter/pom.xml|  5 ++---
 extensions-contrib/gce-extensions/pom.xml|  5 ++---
 extensions-contrib/graphite-emitter/pom.xml  |  5 ++---
 extensions-contrib/influx-extensions/pom.xml |  5 ++---
 extensions-contrib/influxdb-emitter/pom.xml  |  6 ++
 extensions-contrib/kafka-emitter/pom.xml |  5 ++---
 extensions-contrib/materialized-view-maintenance/pom.xml |  6 ++
 extensions-contrib/materialized-view-selection/pom.xml   |  6 ++
 extensions-contrib/momentsketch/pom.xml  |  6 ++
 extensions-contrib/moving-average-query/pom.xml  |  5 ++---
 extensions-contrib/opentsdb-emitter/pom.xml  |  6 ++
 extensions-contrib/redis-cache/pom.xml   |  5 ++---
 extensions-contrib/sqlserver-metadata-storage/pom.xml|  2 +-
 extensions-contrib/statsd-emitter/pom.xml|  6 ++
 extensions-contrib/tdigestsketch/pom.xml |  6 ++
 extensions-contrib/thrift-extensions/pom.xml |  6 ++
 extensions-contrib/time-min-max/pom.xml  |  6 ++
 extensions-contrib/virtual-columns/pom.xml   |  2 +-
 extensions-core/avro-extensions/pom.xml  |  5 ++---
 extensions-core/azure-extensions/pom.xml |  5 ++---
 extensions-core/datasketches/pom.xml |  5 ++---
 extensions-core/druid-basic-security/pom.xml |  6 ++
 extensions-core/druid-bloom-filter/pom.xml   |  5 ++---
 extensions-core/druid-kerberos/pom.xml   |  5 ++---
 extensions-core/druid-pac4j/pom.xml  |  2 +-
 extensions-core/druid-ranger-security/pom.xml|  6 ++
 extensions-core/ec2-extensions/pom.xml   |  5 ++---
 extensions-core/google-extensions/pom.xml|  2 +-
 extensions-core/hdfs-storage/pom.xml |  2 +-
 extensions-core/histogram/pom.xml|  2 +-
 extensions-core/kafka-extraction-namespace/pom.xml   |  5 ++---
 extensions-core/kafka-indexing-service/pom.xml   |  2 +-
 extensions-core/kinesis-indexing-service/pom.xml |  5 ++---
 extensions-core/lookups-cached-global/pom.xml|  5 ++---
 extensions-core/lookups-cached-single/pom.xml|  5 ++---
 extensions-core/mysql-metadata-storage/pom.xml   |  2 +-
 extensions-core/orc-extensions/pom.xml   |  6 ++
 extensions-core/parquet-extensions/pom.xml   |  6 ++
 extensions-core/postgresql-metadata-storage/pom.xml  |  2 +-
 extensions-core/protobuf-extensions/pom.xml  |  6 ++
 extensions-core/s3-extensions/pom.xml|  5 ++---
 extensions-core/simple-client-sslcontext/pom.xml |  6 ++
 extensions-core/stats/pom.xml|  2 +-
 hll/pom.xml  |  2 +-
 indexing-hadoop/pom.xml  |  2 +-
 indexing-service/pom.xml |  2 +-
 integration-tests/pom.xml|  6 +++---
 pom.xml  | 13 ++---
 processing/pom.xml   |  2 +-
 server/pom.xml   |  2 +-
 services/pom.xml |  4 ++--
 sql/pom.xml  |  5 ++---
 web

[druid] branch 0.20.0 updated (239e9f0 -> e174586)

2020-10-04 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 239e9f0  fix array types from escaping into wider query engine (#10460) (#10474)
 add e174586  Web console: switch to switches instead of checkboxes (#10454) (#10470)

No new revisions were added by this update.

Summary of changes:
 .../__snapshots__/menu-checkbox.spec.tsx.snap  | 73 +++---
 .../components/menu-checkbox/menu-checkbox.scss| 26 
 .../menu-checkbox/menu-checkbox.spec.tsx   | 12 +++-
 .../src/components/menu-checkbox/menu-checkbox.tsx | 23 +--
 .../show-log/__snapshots__/show-log.spec.tsx.snap  |  2 +-
 web-console/src/components/show-log/show-log.scss  | 13 ++--
 web-console/src/components/show-log/show-log.tsx   |  6 +-
 .../table-column-selector.tsx  |  2 +-
 .../__snapshots__/warning-checklist.spec.tsx.snap  |  6 +-
 .../warning-checklist/warning-checklist.tsx|  8 +--
 .../src/views/load-data-view/load-data-view.tsx|  5 +-
 .../src/views/query-view/run-button/run-button.tsx | 34 +-
 12 files changed, 131 insertions(+), 79 deletions(-)
 delete mode 100644 web-console/src/components/menu-checkbox/menu-checkbox.scss


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch 0.20.0 updated: Fix the task id creation in CompactionTask (#10445) (#10472)

2020-10-04 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new 38f392d  Fix the task id creation in CompactionTask (#10445) (#10472)
38f392d is described below

commit 38f392d2de5937b8a54be5f9ff9faa85d03b981e
Author: Jonathan Wei 
AuthorDate: Sun Oct 4 13:57:02 2020 -0700

Fix the task id creation in CompactionTask (#10445) (#10472)

* Fix the task id creation in CompactionTask

* review comments

* Ignore test for range partitioning and segment lock

Co-authored-by: Abhishek Agarwal <1477457+abhishekagarwa...@users.noreply.github.com>
---
 .../druid/indexing/common/task/CompactionTask.java | 13 +++--
 .../parallel/ParallelIndexSupervisorTask.java  | 21 ---
 .../common/task/CompactionTaskParallelRunTest.java | 46 +++
 .../parallel/ParallelIndexSupervisorTaskTest.java  | 67 ++
 4 files changed, 126 insertions(+), 21 deletions(-)

diff --git a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/CompactionTask.java b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/CompactionTask.java
index 62a9f26..ba2502f 100644
--- a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/CompactionTask.java
+++ b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/CompactionTask.java
@@ -32,6 +32,7 @@ import com.google.common.collect.Lists;
 import org.apache.curator.shaded.com.google.common.base.Verify;
 import org.apache.druid.client.coordinator.CoordinatorClient;
 import org.apache.druid.client.indexing.ClientCompactionTaskQuery;
+import org.apache.druid.data.input.InputSource;
 import org.apache.druid.data.input.impl.DimensionSchema;
 import org.apache.druid.data.input.impl.DimensionSchema.MultiValueHandling;
 import org.apache.druid.data.input.impl.DimensionsSpec;
@@ -361,10 +362,14 @@ public class CompactionTask extends AbstractBatchIndexTask
  // a new Appenderator on its own instead. As a result, they should use different sequence names to allocate
  // new segmentIds properly. See IndexerSQLMetadataStorageCoordinator.allocatePendingSegments() for details.
  // In this case, we use different fake IDs for each created index task.
-  final String subtaskId = tuningConfig == null || tuningConfig.getMaxNumConcurrentSubTasks() == 1
-   ? createIndexTaskSpecId(i)
-   : getId();
-  return newTask(subtaskId, ingestionSpecs.get(i));
+  ParallelIndexIngestionSpec ingestionSpec = ingestionSpecs.get(i);
+  InputSource inputSource = ingestionSpec.getIOConfig().getNonNullInputSource(
+  ingestionSpec.getDataSchema().getParser()
+  );
+  final String subtaskId = ParallelIndexSupervisorTask.isParallelMode(inputSource, tuningConfig)
+   ? getId()
+   : createIndexTaskSpecId(i);
+  return newTask(subtaskId, ingestionSpec);
 })
 .collect(Collectors.toList());
 
diff --git a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
index dd0e759..4a218a0 100644
--- a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
+++ b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
@@ -466,18 +466,25 @@ public class ParallelIndexSupervisorTask extends AbstractBatchIndexTask implemen
 registerResourceCloserOnAbnormalExit(currentSubTaskHolder);
   }
 
-  private boolean isParallelMode()
+  public static boolean isParallelMode(InputSource inputSource, @Nullable ParallelIndexTuningConfig tuningConfig)
   {
+if (null == tuningConfig) {
+  return false;
+}
+boolean useRangePartitions = useRangePartitions(tuningConfig);
 // Range partitioning is not implemented for runSequential() (but hash partitioning is)
-int minRequiredNumConcurrentSubTasks = useRangePartitions() ? 1 : 2;
+int minRequiredNumConcurrentSubTasks = useRangePartitions ? 1 : 2;
+return inputSource.isSplittable() && tuningConfig.getMaxNumConcurrentSubTasks() >= minRequiredNumConcurrentSubTasks;
+  }

-return baseInputSource.isSplittable()
-   && ingestionSchema.getTuningConfig().getMaxNumConcurrentSubTasks() >= minRequiredNumConcurrentSubTasks;
+  private static boolean useRangePartitions(ParallelIndexTuningConfig tuningConfig)
+  {
+return tuningConfig.getGivenOrDefaultPartitionsSpec() instanceof SingleDimensionParti
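The refactored `isParallelMode()` decision can be condensed into a dependency-free sketch. `InputSource` and `ParallelIndexTuningConfig` are reduced here to the values the check actually reads; `maxNumConcurrentSubTasks` is an `Integer` so that a missing tuning config can be modeled as `null`, matching the new early return. This is an illustration under those assumptions, not Druid's code.

```java
// Standalone sketch of the static isParallelMode() shown in the diff above.
final class ParallelModeSketch
{
  static boolean isParallelMode(
      boolean inputSourceSplittable,
      Integer maxNumConcurrentSubTasks,   // null models a missing tuningConfig
      boolean useRangePartitions
  )
  {
    if (maxNumConcurrentSubTasks == null) {
      return false;  // no tuning config: never parallel
    }
    // Range partitioning has no sequential fallback, so one subtask is
    // enough to stay in parallel mode; hash partitioning needs at least two.
    int minRequired = useRangePartitions ? 1 : 2;
    return inputSourceSplittable && maxNumConcurrentSubTasks >= minRequired;
  }
}
```

Making the method static with explicit parameters is what lets `CompactionTask` reuse the same decision when it picks between `getId()` and `createIndexTaskSpecId(i)` for subtask IDs.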

[druid] branch 0.20.0 updated: Web console: fix lookup edit dialog version setting (#10461) (#10473)

2020-10-04 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new 795bec8  Web console: fix lookup edit dialog version setting (#10461) (#10473)
795bec8 is described below

commit 795bec866e29209ff8fcf49cae6a5b0110563050
Author: Jonathan Wei 
AuthorDate: Sun Oct 4 13:57:13 2020 -0700

Web console: fix lookup edit dialog version setting (#10461) (#10473)

* fix lookup edit dialog

* update snapshots

* clean up test

Co-authored-by: Vadim Ogievetsky 
---
 web-console/src/components/auto-form/auto-form.tsx |   2 +
 .../__snapshots__/form-json-selector.spec.tsx.snap |  43 ++
 .../form-json-selector.spec.tsx}   |  33 +-
 .../form-json-selector/form-json-selector.tsx} |  39 +-
 .../__snapshots__/compaction-dialog.spec.tsx.snap  |  88 +--
 .../compaction-dialog/compaction-dialog.scss   |   2 +-
 .../compaction-dialog/compaction-dialog.tsx|  25 +-
 .../__snapshots__/lookup-edit-dialog.spec.tsx.snap | 745 ++---
 .../lookup-edit-dialog/lookup-edit-dialog.scss |  16 +-
 .../lookup-edit-dialog/lookup-edit-dialog.spec.tsx |   6 +-
 .../lookup-edit-dialog/lookup-edit-dialog.tsx  | 144 ++--
 .../src/views/datasource-view/datasource-view.tsx  |   5 +-
 .../src/views/lookups-view/lookups-view.tsx|   4 +-
 13 files changed, 538 insertions(+), 614 deletions(-)

diff --git a/web-console/src/components/auto-form/auto-form.tsx b/web-console/src/components/auto-form/auto-form.tsx
index 59561ac..ce26cad 100644
--- a/web-console/src/components/auto-form/auto-form.tsx
+++ b/web-console/src/components/auto-form/auto-form.tsx
@@ -50,6 +50,7 @@ export interface Field {
   placeholder?: Functor;
   min?: number;
   zeroMeansUndefined?: boolean;
+  height?: string;
   disabled?: Functor;
   defined?: Functor;
   required?: Functor;
@@ -272,6 +273,7 @@ export class AutoForm> extends React.PureComponent
 value={deepGet(model as any, field.name)}
 onChange={(v: any) => this.fieldChange(field, v)}
 placeholder={AutoForm.evaluateFunctor(field.placeholder, model, '')}
+height={field.height}
   />
 );
   }
diff --git a/web-console/src/components/form-json-selector/__snapshots__/form-json-selector.spec.tsx.snap b/web-console/src/components/form-json-selector/__snapshots__/form-json-selector.spec.tsx.snap
new file mode 100644
index 000..d2ec216
--- /dev/null
+++ b/web-console/src/components/form-json-selector/__snapshots__/form-json-selector.spec.tsx.snap
@@ -0,0 +1,43 @@
+// Jest Snapshot v1, https://goo.gl/fbAQLP
+
+exports[`FormJsonSelector matches snapshot form json 1`] = `
+
+  
+
+
+  
+
+`;
+
+exports[`FormJsonSelector matches snapshot form tab 1`] = `
+
+  
+
+
+  
+
+`;
diff --git a/web-console/src/dialogs/lookup-edit-dialog/lookup-edit-dialog.scss b/web-console/src/components/form-json-selector/form-json-selector.spec.tsx
similarity index 60%
copy from web-console/src/dialogs/lookup-edit-dialog/lookup-edit-dialog.scss
copy to web-console/src/components/form-json-selector/form-json-selector.spec.tsx
index 7ee469b..ae7c3a9 100644
--- a/web-console/src/dialogs/lookup-edit-dialog/lookup-edit-dialog.scss
+++ b/web-console/src/components/form-json-selector/form-json-selector.spec.tsx
@@ -16,28 +16,21 @@
  * limitations under the License.
  */
 
-.lookup-edit-dialog {
-  &.bp3-dialog {
-top: 10vh;
+import { shallow } from 'enzyme';
+import React from 'react';
 
-width: 600px;
-  }
+import { FormJsonSelector } from './form-json-selector';
 
-  .auto-form {
-margin: 5px 20px 10px;
-  }
+describe('FormJsonSelector', () => {
+  it('matches snapshot form tab', () => {
+const formJsonSelector = shallow( {}} />);
 
-  .lookup-label {
-padding: 0 20px;
-margin-top: 5px;
-margin-bottom: 5px;
-  }
+expect(formJsonSelector).toMatchSnapshot();
+  });
 
-  .ace-solarized-dark {
-background-color: #232c35;
-  }
+  it('matches snapshot form json', () => {
+const formJsonSelector = shallow( {}} />);
 
-  .ace_gutter-layer {
-background-color: #27313c;
-  }
-}
+expect(formJsonSelector).toMatchSnapshot();
+  });
+});
diff --git a/web-console/src/dialogs/lookup-edit-dialog/lookup-edit-dialog.scss b/web-console/src/components/form-json-selector/form-json-selector.tsx
similarity index 54%
copy from web-console/src/dialogs/lookup-edit-dialog/lookup-edit-dialog.scss
copy to web-console/src/components/form-json-selector/form-json-selector.tsx
index 7ee469b..4999826 100644
--- a/web-console/src/dialogs/lookup-edit-dialog/lookup-edit-dialog.scss
+++ b/web-console/src/components/form-json-selector/form-json-selector.tsx
@@ -16,28 +16,25 @@
  * limitations under the Lice

[druid] branch 0.20.0 updated: Allow using jsonpath predicates with AvroFlattener (#10330) (#10475)

2020-10-04 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new eb6b2e6  Allow using jsonpath predicates with AvroFlattener (#10330) (#10475)
eb6b2e6 is described below

commit eb6b2e6d05788ce83d4a60094cc8ad528ee1137c
Author: Jonathan Wei 
AuthorDate: Sun Oct 4 15:56:07 2020 -0700

Allow using jsonpath predicates with AvroFlattener (#10330) (#10475)

Co-authored-by: Lasse Krogh Mammen 
---
 .../druid/data/input/avro/GenericAvroJsonProvider.java |  2 +-
 .../druid/data/input/avro/AvroFlattenerMakerTest.java  | 18 ++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/extensions-core/avro-extensions/src/main/java/org/apache/druid/data/input/avro/GenericAvroJsonProvider.java b/extensions-core/avro-extensions/src/main/java/org/apache/druid/data/input/avro/GenericAvroJsonProvider.java
index 42195ca..ab6a53e 100644
--- a/extensions-core/avro-extensions/src/main/java/org/apache/druid/data/input/avro/GenericAvroJsonProvider.java
+++ b/extensions-core/avro-extensions/src/main/java/org/apache/druid/data/input/avro/GenericAvroJsonProvider.java
@@ -193,6 +193,6 @@ public class GenericAvroJsonProvider implements JsonProvider
   @Override
   public Object unwrap(final Object o)
   {
-throw new UnsupportedOperationException("Unused");
+return o;
   }
 }
diff --git a/extensions-core/avro-extensions/src/test/java/org/apache/druid/data/input/avro/AvroFlattenerMakerTest.java b/extensions-core/avro-extensions/src/test/java/org/apache/druid/data/input/avro/AvroFlattenerMakerTest.java
index d3faaf4..6becdf7 100644
--- a/extensions-core/avro-extensions/src/test/java/org/apache/druid/data/input/avro/AvroFlattenerMakerTest.java
+++ b/extensions-core/avro-extensions/src/test/java/org/apache/druid/data/input/avro/AvroFlattenerMakerTest.java
@@ -23,6 +23,8 @@ import org.apache.druid.data.input.AvroStreamInputRowParserTest;
 import org.apache.druid.data.input.SomeAvroDatum;
 import org.junit.Assert;
 import org.junit.Test;
+import java.util.Collections;
+import java.util.List;
 
 public class AvroFlattenerMakerTest
 {
@@ -195,6 +197,22 @@ public class AvroFlattenerMakerTest
 record.getSomeRecordArray(),
 flattener.makeJsonPathExtractor("$.someRecordArray").apply(record)
 );
+
+Assert.assertEquals(
+record.getSomeRecordArray().get(0).getNestedString(),
+flattener.makeJsonPathExtractor("$.someRecordArray[0].nestedString").apply(record)
+);
+
+Assert.assertEquals(
+record.getSomeRecordArray(),
+flattener.makeJsonPathExtractor("$.someRecordArray[?(@.nestedString)]").apply(record)
+);
+
+List nestedStringArray = Collections.singletonList(record.getSomeRecordArray().get(0).getNestedString().toString());
+Assert.assertEquals(
+nestedStringArray,
+flattener.makeJsonPathExtractor("$.someRecordArray[?(@.nestedString=='string in record')].nestedString").apply(record)
+);
   }
 
   @Test(expected = UnsupportedOperationException.class)
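The one-line change to `unwrap()` matters because jsonpath predicate evaluation (`[?(...)]` filters) asks the `JsonProvider` to unwrap each candidate value before comparing it against the literal in the path; when `unwrap()` threw, every predicate failed. A hedged, dependency-free sketch of that interaction, with `unwrap` modeled as a function value instead of the real jayway `JsonProvider` interface:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;

// Illustrates why unwrap() must return its argument rather than throw:
// predicate filtering compares the unwrapped field value to the expected
// literal, so unwrap runs once per candidate record.
final class PredicateFilterSketch
{
  static List<Object> filterByNestedString(
      List<Map<String, Object>> records,
      String expected,
      UnaryOperator<Object> unwrap   // stand-in for JsonProvider.unwrap()
  )
  {
    return records.stream()
        .map(r -> unwrap.apply(r.get("nestedString")))
        .filter(expected::equals)
        .collect(Collectors.toList());
  }
}
```

With the old behavior (`unwrap` throwing `UnsupportedOperationException("Unused")`) the filter aborts on the first record; with the identity behavior of the fix, the matching values come through, which is exactly what the new `[?(@.nestedString=='string in record')]` assertions exercise.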





[druid] annotated tag druid-0.20.0-rc1 updated (2d6d036 -> 4cb964e)

2020-10-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to annotated tag druid-0.20.0-rc1
in repository https://gitbox.apache.org/repos/asf/druid.git.


*** WARNING: tag druid-0.20.0-rc1 was modified! ***

from 2d6d036  (commit)
  to 4cb964e  (tag)
 tagging 2d6d03688bbb1d2321baec6555887fe9317f5eb4 (commit)
 replaces druid-0.8.0-rc1
  by jon-wei
  on Mon Oct 5 19:17:57 2020 -0700

- Log -
[maven-release-plugin] copy for tag druid-0.20.0-rc1
---


No new revisions were added by this update.

Summary of changes:





svn commit: r41708 - /dev/druid/0.20.0-rc1/

2020-10-05 Thread jonwei
Author: jonwei
Date: Tue Oct  6 05:16:54 2020
New Revision: 41708

Log:
Add 0.20.0-rc1 artifacts

Added:
dev/druid/0.20.0-rc1/
dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz   (with props)
dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.asc
dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.sha512
dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz   (with props)
dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.asc
dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.sha512

Added: dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz
==
Binary file - no diff available.

Propchange: dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.asc
==
--- dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.asc (added)
+++ dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.asc Tue Oct  6 05:16:54 2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCgAdFiEEBJXgMcNdEyR5wkHskoPEkhrESD4FAl978G8ACgkQkoPEkhrE
+SD5NzQ/+Ik3H92fOEdvMYisAoO1HrXfqD++1gh0YzgFW7Eh0kJjxT4A6tK2UgLyo
+bAE+z/t3V9+wDSc7xaaPHO+xDW9kR+YaWF8ZKbvQoXmAdQRhkEpgScR95JOFLiOs
+YJgS5V9FhRKf9P1vCAT/BZkpgHNH4GxVPYkJ0tj0043RgSLECaJ4arFhPrskdYa7
++98CvQnHbwdoPtXcsjfdk754j7SEAHq3IoVgTfzYOO81fSJX3qZ2JQIrdCZlv7wj
+So+2dO3z+28v7zzHAhomx2FCTbXtEcTGh+EN8/i0IuAX3fmVxQPCcD0oc2+/nd6y
+GPbtqSohjK4g0nNofvatvM1ifad0ZfBas0fsHOK16AF9S14vH70XbnQ+D81Is1ZC
+sT4zSdeaUmQvs+IvMpY3Tm3vWvbFa5xvAvO7t2ybsZJ4M9dwfip4Zl1A7ixzUcyr
+nk6EaiVET0F1CBQBC2nfSfWkrMCXD9vKa9yIYjofH3WrpPrnIVT3UelmR0Yk56aJ
+wKPkaZrHTrEX/AM/GnHmpr4NFWf7EX8OJvTx1wIKtTsMiCAEWicwoIvvK2ECYQPc
+pCbqrrrjdl/xAvQRcj3ScJfnjIefNRJidUHJJ9orq6JMSis1OX1PF30rRsYpywNY
+MJBdscDsm6p7aV2f5UGYTpUsrTG28OEuHfEcnoZUBzfTQMmQEBk=
+=tmv9
+-END PGP SIGNATURE-

Added: dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.sha512
==
--- dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.sha512 (added)
+++ dev/druid/0.20.0-rc1/apache-druid-0.20.0-bin.tar.gz.sha512 Tue Oct  6 05:16:54 2020
@@ -0,0 +1 @@
+a1225947af35cac6483d50694e4cfb3f8e5a97741fbe171edc6528959049f1b38f3e7d4044e0644afd9b209d890f74282a4befb40d570919fb077f6522b7737e
\ No newline at end of file

Added: dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz
==
Binary file - no diff available.

Propchange: dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.asc
==
--- dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.asc (added)
+++ dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.asc Tue Oct  6 05:16:54 2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCgAdFiEEBJXgMcNdEyR5wkHskoPEkhrESD4FAl978G8ACgkQkoPEkhrE
+SD4M8w//f0Bj75eqXymv1wDy+QFLiz/GVeCuXP/n124hJCRn8i0WRLrTWKHSQBaJ
+NjkkTM66dRiYCbczA4Bt3rqUJpvqzVTf1Ls/M8boDRudwpIBRp0ZjfNxqLp1wGdH
+kDVQAbdYlgFnrB01TM6PhvigvXJ4Fy8dHZfxGXX5mzPG9+a98/f4hAJayUplLDPa
+NVg2m/7qdLc7TWYEZ+wL7hvFb70mDUlD1je7hvMctkzP1DxiDNkIOBPUxeIFQehD
+w7fy6FilHzgBqXVa8ghciRAD2YhIPJQddMxbotKD0TsceTFnyxTvGD6eVs0CMMve
+rG2hqd4uZsPhJ+Zz54cqqDSIhC22Ve9vNclDbGBPUMnhLNKjfE8Da/q9k/5227v/
+v8i5T1hqNWMQcUezRXIQ1Q+TXdXI5mLgMJZR3XF6HwTr6WDokzRvLxPo1BXypnkz
+snZniV9C2j90kXWHll2E9YYhz76p6Ur4Gf+DjIRe0etDdxXS/Qw5DrxSZClhv//O
+Cg42ctXxcbmj9Lqa0450WvntHEjy8Uqw6gUdQO6mi/7J8lQkdVMuC6f46DyDYrSZ
+x1zzsoZYP2gIjLj7Bxpa8Wq6oy6eXhvcVDLxeVIWq8yCupYivPPPhzWmheRts7Zs
+3Aqe67J9iyS9sE4vLie7JrUeiwHET8R5QqNEnJPOLrxgYh3Kwdc=
+=g13j
+-END PGP SIGNATURE-

Added: dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.sha512
==
--- dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.sha512 (added)
+++ dev/druid/0.20.0-rc1/apache-druid-0.20.0-src.tar.gz.sha512 Tue Oct  6 05:16:54 2020
@@ -0,0 +1 @@
+c1718ad615d689ec0350f170bdefdbc9e7aa1628693ebb7d5e6de74475e5088b9e99383a68a627f08e434aa89e43fcae84847ec3c9869687176ff8d43a3848c6
\ No newline at end of file






[druid-website-src] branch 20docs created (now a28ee78)

2020-10-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 20docs
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git.


  at a28ee78  Add 0.20.0 docs

This branch includes the following new commits:

 new a28ee78  Add 0.20.0 docs

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[druid-website] branch 20docs created (now fab4b1e)

2020-10-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 20docs
in repository https://gitbox.apache.org/repos/asf/druid-website.git.


  at fab4b1e  Add 0.20.0 docs

This branch includes the following new commits:

 new fab4b1e  Add 0.20.0 docs

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[druid-website] branch asf-staging updated (350df82 -> 6b6df73)

2020-10-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/druid-website.git.


from 350df82  latest 0.19.0 docs for staging
 add fab4b1e  Add 0.20.0 docs
 new 6b6df73  Merge pull request #101 from apache/20docs

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 blog/2011/04/30/introducing-druid.html |2 +-
 blog/2011/05/20/druid-part-deux.html   |2 +-
 blog/2012/01/19/scaling-the-druid-data-store.html  |2 +-
 ...-right-cardinality-estimation-for-big-data.html |2 +-
 blog/2012/09/21/druid-bitmap-compression.html  |2 +-
 ...ond-hadoop-fast-ad-hoc-queries-on-big-data.html |2 +-
 blog/2012/10/24/introducing-druid.html |2 +-
 .../interactive-queries-meet-real-time-data.html   |2 +-
 blog/2013/04/03/15-minutes-to-live-druid.html  |2 +-
 blog/2013/04/03/druid-r-meetup.html|2 +-
 blog/2013/04/26/meet-the-druid.html|2 +-
 blog/2013/05/10/real-time-for-real.html|2 +-
 blog/2013/08/06/twitter-tutorial.html  |2 +-
 blog/2013/08/30/loading-data.html  |2 +-
 .../12/the-art-of-approximating-distributions.html |2 +-
 blog/2013/09/16/upcoming-events.html   |2 +-
 .../09/19/launching-druid-with-apache-whirr.html   |2 +-
 blog/2013/09/20/druid-at-xldb.html |2 +-
 blog/2013/11/04/querying-your-data.html|2 +-
 blog/2014/02/03/rdruid-and-twitterstream.html  |2 +-
 ...oglog-optimizations-for-real-world-systems.html |2 +-
 blog/2014/03/12/batch-ingestion.html   |2 +-
 blog/2014/03/17/benchmarking-druid.html|2 +-
 blog/2014/04/15/intro-to-pydruid.html  |2 +-
 ...ff-on-the-rise-of-the-real-time-data-stack.html |2 +-
 .../07/23/five-tips-for-a-f-ing-great-logo.html|2 +-
 blog/2015/02/20/towards-a-community-led-druid.html |2 +-
 blog/2015/11/03/seeking-new-committers.html|2 +-
 blog/2016/01/06/announcing-new-committers.html |2 +-
 blog/2016/06/28/druid-0-9-1.html   |2 +-
 blog/2016/12/01/druid-0-9-2.html   |2 +-
 blog/2017/04/18/druid-0-10-0.html  |2 +-
 blog/2017/08/22/druid-0-10-1.html  |2 +-
 blog/2017/12/04/druid-0-11-0.html  |2 +-
 blog/2018/03/08/druid-0-12-0.html  |2 +-
 blog/2018/06/08/druid-0-12-1.html  |2 +-
 blog/index.html|2 +-
 community/cla.html |2 +-
 community/index.html   |2 +-
 css/base.css   |  280 ---
 css/blogs.css  |   68 -
 css/bootstrap-pure.css | 1855 -
 css/docs.css   |  126 --
 css/footer.css |   29 -
 css/header.css |  110 -
 css/index.css  |   50 -
 css/news-list.css  |   63 -
 css/reset.css  |   44 -
 css/syntax.css |  281 ---
 css/variables.css  |0
 .../comparisons/druid-vs-elasticsearch.html|4 +-
 .../comparisons/druid-vs-key-value.html|4 +-
 .../comparisons/druid-vs-kudu.html |4 +-
 .../comparisons/druid-vs-redshift.html |4 +-
 .../comparisons/druid-vs-spark.html|4 +-
 .../comparisons/druid-vs-sql-on-hadoop.html|4 +-
 docs/0.13.0-incubating/configuration/index.html|4 +-
 docs/0.13.0-incubating/configuration/logging.html  |4 +-
 docs/0.13.0-incubating/configuration/realtime.html |4 +-
 .../dependencies/cassandra-deep-storage.html   |4 +-
 .../dependencies/deep-storage.html |4 +-
 .../dependencies/metadata-storage.html |4 +-
 docs/0.13.0-incubating/dependencies/zookeeper.html |4 +-
 docs/0.13.0-incubating/design/auth.html|4 +-
 docs/0.13.0-incubating/design/broker.html  |4 +-
 docs/0.13.0-incubating/design/coordinator.html |4 +-
 docs/0.13.0-incubating/design/historical.html  |4 +-
 docs/0.13.0-incubating/design/index.html   |4 +-
 .../0.13.0-incubating/design/indexing-service.html |4 +-
 docs/0.13.0-incubating/design/middlemanager.html   |4 +-
 docs/0.13.0-incubating/design/overlord.html   

[druid-website-src] 01/01: Merge pull request #173 from apache/20docs

2020-10-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git

commit 504c40c7a84a4d6507e5effa0cfc448b8ecb3119
Merge: c32ee24 a28ee78
Author: Jonathan Wei 
AuthorDate: Mon Oct 5 22:40:53 2020 -0700

Merge pull request #173 from apache/20docs

Add 0.20.0 docs

 docs/0.20.0/About-Experimental-Features.html   |   8 +
 docs/0.20.0/Aggregations.html  |   8 +
 docs/0.20.0/ApproxHisto.html   |   8 +
 docs/0.20.0/Batch-ingestion.html   |   8 +
 docs/0.20.0/Booting-a-production-cluster.html  |   8 +
 docs/0.20.0/Broker-Config.html |   8 +
 docs/0.20.0/Broker.html|   8 +
 docs/0.20.0/Build-from-source.html |   8 +
 docs/0.20.0/Cassandra-Deep-Storage.html|   8 +
 docs/0.20.0/Cluster-setup.html |   8 +
 docs/0.20.0/Compute.html   |   8 +
 docs/0.20.0/Concepts-and-Terminology.html  |   8 +
 docs/0.20.0/Configuration.html |   8 +
 docs/0.20.0/Contribute.html|   8 +
 docs/0.20.0/Coordinator-Config.html|   8 +
 docs/0.20.0/Coordinator.html   |   8 +
 docs/0.20.0/DataSource.html|   8 +
 docs/0.20.0/DataSourceMetadataQuery.html   |   8 +
 docs/0.20.0/Data_formats.html  |   8 +
 docs/0.20.0/Deep-Storage.html  |   8 +
 docs/0.20.0/Design.html|   8 +
 docs/0.20.0/DimensionSpecs.html|   8 +
 docs/0.20.0/Download.html  |   8 +
 docs/0.20.0/Druid-Personal-Demo-Cluster.html   |   8 +
 docs/0.20.0/Druid-vs-Cassandra.html|   8 +
 docs/0.20.0/Druid-vs-Elasticsearch.html|   8 +
 docs/0.20.0/Druid-vs-Hadoop.html   |   8 +
 docs/0.20.0/Druid-vs-Impala-or-Shark.html  |   8 +
 docs/0.20.0/Druid-vs-Redshift.html |   8 +
 docs/0.20.0/Druid-vs-Spark.html|   8 +
 docs/0.20.0/Druid-vs-Vertica.html  |   8 +
 docs/0.20.0/Evaluate.html  |   8 +
 docs/0.20.0/Examples.html  |   8 +
 docs/0.20.0/Filters.html   |   8 +
 docs/0.20.0/Firehose.html  |   8 +
 docs/0.20.0/GeographicQueries.html |   8 +
 docs/0.20.0/Granularities.html |   8 +
 docs/0.20.0/GroupByQuery.html  |   8 +
 docs/0.20.0/Hadoop-Configuration.html  |   8 +
 docs/0.20.0/Having.html|   8 +
 docs/0.20.0/Historical-Config.html |   8 +
 docs/0.20.0/Historical.html|   8 +
 docs/0.20.0/Home.html  |   8 +
 docs/0.20.0/Including-Extensions.html  |   8 +
 docs/0.20.0/Indexing-Service-Config.html   |   8 +
 docs/0.20.0/Indexing-Service.html  |   8 +
 docs/0.20.0/Ingestion-FAQ.html |   8 +
 docs/0.20.0/Ingestion-overview.html|   8 +
 docs/0.20.0/Ingestion.html |   8 +
 .../Integrating-Druid-With-Other-Technologies.html |   8 +
 docs/0.20.0/Kafka-Eight.html   |   8 +
 docs/0.20.0/Libraries.html |   8 +
 docs/0.20.0/LimitSpec.html |   8 +
 docs/0.20.0/Loading-Your-Data.html |   8 +
 docs/0.20.0/Logging.html   |   8 +
 docs/0.20.0/Master.html|   8 +
 docs/0.20.0/Metadata-storage.html  |   8 +
 docs/0.20.0/Metrics.html   |   8 +
 docs/0.20.0/Middlemanager.html |   8 +
 docs/0.20.0/Modules.html   |   8 +
 docs/0.20.0/MySQL.html |   8 +
 docs/0.20.0/OrderBy.html   |   8 +
 docs/0.20.0/Other-Hadoop.html  |   8 +
 docs/0.20.0/Papers-and-talks.html  |   8 +
 docs/0.20.0/Peons.html |   8 +
 docs/0.20.0/Performance-FAQ.html   |   8 +
 docs/0.20.0/Plumber.html   |   8 +
 docs/0.20.0/Post-aggregations.html |   8 +
 docs/0.20.0/Production-Cluster-Configuration.html  |   8 +
 docs/0.20.0/Query-Context.html |   8 +
 docs/0.20.0/Querying-your-data.html|   8 +
 docs/0.20.0/Querying.html  |   8 +
 docs/0.20.0/Realtime-Config.html   |   8 +
 docs/0.20.0/Realtime-ingestion.html|   8 +
 docs/0.20.0/Realtime.html  |   8 +
 docs/0.20.0/Recommendations.html   |   8 +
 docs/0.20.0/Rolling-Updates.html

[druid-website-src] branch master updated (c32ee24 -> 504c40c)

2020-10-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git.


from c32ee24  Merge pull request #171 from druid-matt/patch-23
 add a28ee78  Add 0.20.0 docs
 new 504c40c  Merge pull request #173 from apache/20docs

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../About-Experimental-Features.html   |   0
 docs/{latest => 0.20.0}/Aggregations.html  |   0
 docs/{latest => 0.20.0}/ApproxHisto.html   |   0
 docs/{latest => 0.20.0}/Batch-ingestion.html   |   0
 .../Booting-a-production-cluster.html  |   0
 docs/{latest => 0.20.0}/Broker-Config.html |   0
 docs/{latest => 0.20.0}/Broker.html|   0
 docs/{latest => 0.20.0}/Build-from-source.html |   0
 .../{latest => 0.20.0}/Cassandra-Deep-Storage.html |   0
 docs/{latest => 0.20.0}/Cluster-setup.html |   0
 docs/{latest => 0.20.0}/Compute.html   |   0
 .../Concepts-and-Terminology.html  |   0
 docs/{latest => 0.20.0}/Configuration.html |   0
 docs/{latest => 0.20.0}/Contribute.html|   0
 docs/{latest => 0.20.0}/Coordinator-Config.html|   0
 docs/{latest => 0.20.0}/Coordinator.html   |   0
 docs/{latest => 0.20.0}/DataSource.html|   0
 .../DataSourceMetadataQuery.html   |   0
 docs/{latest => 0.20.0}/Data_formats.html  |   0
 docs/{latest => 0.20.0}/Deep-Storage.html  |   0
 docs/{latest => 0.20.0}/Design.html|   0
 docs/{latest => 0.20.0}/DimensionSpecs.html|   0
 docs/{latest => 0.20.0}/Download.html  |   0
 .../Druid-Personal-Demo-Cluster.html   |   0
 docs/{latest => 0.20.0}/Druid-vs-Cassandra.html|   0
 .../{latest => 0.20.0}/Druid-vs-Elasticsearch.html |   0
 docs/{latest => 0.20.0}/Druid-vs-Hadoop.html   |   0
 .../Druid-vs-Impala-or-Shark.html  |   0
 docs/{latest => 0.20.0}/Druid-vs-Redshift.html |   0
 docs/{latest => 0.20.0}/Druid-vs-Spark.html|   0
 docs/{latest => 0.20.0}/Druid-vs-Vertica.html  |   0
 docs/{latest => 0.20.0}/Evaluate.html  |   0
 docs/{latest => 0.20.0}/Examples.html  |   0
 docs/{latest => 0.20.0}/Filters.html   |   0
 docs/{latest => 0.20.0}/Firehose.html  |   0
 docs/{latest => 0.20.0}/GeographicQueries.html |   0
 docs/{latest => 0.20.0}/Granularities.html |   0
 docs/{latest => 0.20.0}/GroupByQuery.html  |   0
 docs/{latest => 0.20.0}/Hadoop-Configuration.html  |   0
 docs/{latest => 0.20.0}/Having.html|   0
 docs/{latest => 0.20.0}/Historical-Config.html |   0
 docs/{latest => 0.20.0}/Historical.html|   0
 docs/{latest => 0.20.0}/Home.html  |   0
 docs/{latest => 0.20.0}/Including-Extensions.html  |   0
 .../Indexing-Service-Config.html   |   0
 docs/{latest => 0.20.0}/Indexing-Service.html  |   0
 docs/{latest => 0.20.0}/Ingestion-FAQ.html |   0
 docs/{latest => 0.20.0}/Ingestion-overview.html|   0
 docs/{latest => 0.20.0}/Ingestion.html |   0
 .../Integrating-Druid-With-Other-Technologies.html |   0
 docs/{latest => 0.20.0}/Kafka-Eight.html   |   0
 docs/{latest => 0.20.0}/Libraries.html |   0
 docs/{latest => 0.20.0}/LimitSpec.html |   0
 docs/{latest => 0.20.0}/Loading-Your-Data.html |   0
 docs/{latest => 0.20.0}/Logging.html   |   0
 docs/{latest => 0.20.0}/Master.html|   0
 docs/{latest => 0.20.0}/Metadata-storage.html  |   0
 docs/{latest => 0.20.0}/Metrics.html   |   0
 docs/{latest => 0.20.0}/Middlemanager.html |   0
 docs/{latest => 0.20.0}/Modules.html   |   0
 docs/{latest => 0.20.0}/MySQL.html |   0
 docs/{latest => 0.20.0}/OrderBy.html   |   0
 docs/{latest => 0.20.0}/Other-Hadoop.html  |   0
 docs/{latest => 0.20.0}/Papers-and-talks.html  |   0
 docs/{latest => 0.20.0}/Peons.html |   0
 docs/{latest => 0.20.0}/Performance-FAQ.html   |   0
 docs/{latest => 0.20.0}/Plumber.html   |   0
 docs/{latest => 0.20.0}/Post-aggregations.html |   0
 .../Production-Cluster-Configuration.html  |   0
 docs/{latest => 0.20.0}/Query-Context.html |   0
 docs/{latest => 0.20.0}/Querying-your-data.html|   0
 docs/{latest => 0.20.0}/Querying.html  |   0
 docs/{lat

[druid-website] 01/01: Merge pull request #101 from apache/20docs

2020-10-05 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/druid-website.git

commit 6b6df736785489add3f42a9e4d7bb92608217e69
Merge: 350df82 fab4b1e
Author: Jonathan Wei 
AuthorDate: Mon Oct 5 22:40:59 2020 -0700

Merge pull request #101 from apache/20docs

Add 0.20.0 docs for staging

 blog/2011/04/30/introducing-druid.html |2 +-
 blog/2011/05/20/druid-part-deux.html   |2 +-
 blog/2012/01/19/scaling-the-druid-data-store.html  |2 +-
 ...-right-cardinality-estimation-for-big-data.html |2 +-
 blog/2012/09/21/druid-bitmap-compression.html  |2 +-
 ...ond-hadoop-fast-ad-hoc-queries-on-big-data.html |2 +-
 blog/2012/10/24/introducing-druid.html |2 +-
 .../interactive-queries-meet-real-time-data.html   |2 +-
 blog/2013/04/03/15-minutes-to-live-druid.html  |2 +-
 blog/2013/04/03/druid-r-meetup.html|2 +-
 blog/2013/04/26/meet-the-druid.html|2 +-
 blog/2013/05/10/real-time-for-real.html|2 +-
 blog/2013/08/06/twitter-tutorial.html  |2 +-
 blog/2013/08/30/loading-data.html  |2 +-
 .../12/the-art-of-approximating-distributions.html |2 +-
 blog/2013/09/16/upcoming-events.html   |2 +-
 .../09/19/launching-druid-with-apache-whirr.html   |2 +-
 blog/2013/09/20/druid-at-xldb.html |2 +-
 blog/2013/11/04/querying-your-data.html|2 +-
 blog/2014/02/03/rdruid-and-twitterstream.html  |2 +-
 ...oglog-optimizations-for-real-world-systems.html |2 +-
 blog/2014/03/12/batch-ingestion.html   |2 +-
 blog/2014/03/17/benchmarking-druid.html|2 +-
 blog/2014/04/15/intro-to-pydruid.html  |2 +-
 ...ff-on-the-rise-of-the-real-time-data-stack.html |2 +-
 .../07/23/five-tips-for-a-f-ing-great-logo.html|2 +-
 blog/2015/02/20/towards-a-community-led-druid.html |2 +-
 blog/2015/11/03/seeking-new-committers.html|2 +-
 blog/2016/01/06/announcing-new-committers.html |2 +-
 blog/2016/06/28/druid-0-9-1.html   |2 +-
 blog/2016/12/01/druid-0-9-2.html   |2 +-
 blog/2017/04/18/druid-0-10-0.html  |2 +-
 blog/2017/08/22/druid-0-10-1.html  |2 +-
 blog/2017/12/04/druid-0-11-0.html  |2 +-
 blog/2018/03/08/druid-0-12-0.html  |2 +-
 blog/2018/06/08/druid-0-12-1.html  |2 +-
 blog/index.html|2 +-
 community/cla.html |2 +-
 community/index.html   |2 +-
 css/base.css   |  280 ---
 css/blogs.css  |   68 -
 css/bootstrap-pure.css | 1855 -
 css/docs.css   |  126 --
 css/footer.css |   29 -
 css/header.css |  110 -
 css/index.css  |   50 -
 css/news-list.css  |   63 -
 css/reset.css  |   44 -
 css/syntax.css |  281 ---
 css/variables.css  |0
 .../comparisons/druid-vs-elasticsearch.html|4 +-
 .../comparisons/druid-vs-key-value.html|4 +-
 .../comparisons/druid-vs-kudu.html |4 +-
 .../comparisons/druid-vs-redshift.html |4 +-
 .../comparisons/druid-vs-spark.html|4 +-
 .../comparisons/druid-vs-sql-on-hadoop.html|4 +-
 docs/0.13.0-incubating/configuration/index.html|4 +-
 docs/0.13.0-incubating/configuration/logging.html  |4 +-
 docs/0.13.0-incubating/configuration/realtime.html |4 +-
 .../dependencies/cassandra-deep-storage.html   |4 +-
 .../dependencies/deep-storage.html |4 +-
 .../dependencies/metadata-storage.html |4 +-
 docs/0.13.0-incubating/dependencies/zookeeper.html |4 +-
 docs/0.13.0-incubating/design/auth.html|4 +-
 docs/0.13.0-incubating/design/broker.html  |4 +-
 docs/0.13.0-incubating/design/coordinator.html |4 +-
 docs/0.13.0-incubating/design/historical.html  |4 +-
 docs/0.13.0-incubating/design/index.html   |4 +-
 .../0.13.0-incubating/design/indexing-service.html |4 +-
 docs/0.13.0-incubating/design/middlemanager.html   |4 +-
 docs/0.13.0-incubating/design/overlord.html|4 +-
 docs/0.13.0-incubating/design/peons.html   |4 +-
 docs/0.13.0-incubating/design/plumber.html |4 +-
 docs/0.13.0-incubating/design/realtime.html|4

[druid] branch master updated: Suppress CVE-2018-11765 for hadoop dependencies (#10485)

2020-10-07 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 0aa2a8e  Suppress CVE-2018-11765 for hadoop dependencies (#10485)
0aa2a8e is described below

commit 0aa2a8e2c641aa8eb8722b76b205f70f7bbff8cf
Author: Jonathan Wei 
AuthorDate: Wed Oct 7 21:55:34 2020 -0700

Suppress CVE-2018-11765 for hadoop dependencies (#10485)
---
 owasp-dependency-check-suppressions.xml | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/owasp-dependency-check-suppressions.xml 
b/owasp-dependency-check-suppressions.xml
index 998e5c6..6a532ef 100644
--- a/owasp-dependency-check-suppressions.xml
+++ b/owasp-dependency-check-suppressions.xml
@@ -281,4 +281,11 @@
      <cve>CVE-2018-8009</cve>
      <cve>CVE-2018-8029</cve>
   </suppress>
+  <suppress>
+     <packageUrl regex="true">^pkg:maven/org\.apache\.hadoop/hadoop\-.*@.*$</packageUrl>
+     <cve>CVE-2018-11765</cve>
+  </suppress>
 </suppressions>


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch 0.20.0 updated (eb6b2e6 -> 000e0b6)

2020-10-07 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git.


from eb6b2e6  Allow using jsonpath predicates with AvroFlattener (#10330) 
(#10475)
 add 000e0b6  Web console: Don't include realtime segments in size 
calculations. (#10482) (#10486)

No new revisions were added by this update.

Summary of changes:
 .../src/views/datasource-view/datasource-view.tsx| 16 
 1 file changed, 8 insertions(+), 8 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch 0.20.0 updated: Fix compaction task slot computation in auto compaction (#10479) (#10488)

2020-10-07 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new 8a651ee  Fix compaction task slot computation in auto compaction 
(#10479) (#10488)
8a651ee is described below

commit 8a651ee7f077d4682a3b1a3a4a50f9d874e00b1d
Author: Jonathan Wei 
AuthorDate: Wed Oct 7 21:58:06 2020 -0700

Fix compaction task slot computation in auto compaction (#10479) (#10488)

* Fix compaction task slot computation in auto compaction

* add tests for task counting

Co-authored-by: Jihoon Son 
---
 .../parallel/ParallelIndexSupervisorTask.java  |   4 +
 .../parallel/ParallelIndexSupervisorTaskTest.java  |   2 +-
 .../server/coordinator/duty/CompactSegments.java   |  63 +--
 .../coordinator/duty/CompactSegmentsTest.java  | 203 +
 4 files changed, 215 insertions(+), 57 deletions(-)

diff --git 
a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
index 4a218a0..acac279 100644
--- 
a/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
+++ 
b/indexing-service/src/main/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTask.java
@@ -466,6 +466,10 @@ public class ParallelIndexSupervisorTask extends 
AbstractBatchIndexTask implemen
 registerResourceCloserOnAbnormalExit(currentSubTaskHolder);
   }
 
+  /**
+   * Returns true if this task can run in the parallel mode with the given 
inputSource and tuningConfig.
+   * This method should be synchronized with 
CompactSegments.isParallelMode(ClientCompactionTaskQueryTuningConfig).
+   */
   public static boolean isParallelMode(InputSource inputSource, @Nullable 
ParallelIndexTuningConfig tuningConfig)
   {
 if (null == tuningConfig) {
diff --git 
a/indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTaskTest.java
 
b/indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTaskTest.java
index ef1db8a..710df1d 100644
--- 
a/indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTaskTest.java
+++ 
b/indexing-service/src/test/java/org/apache/druid/indexing/common/task/batch/parallel/ParallelIndexSupervisorTaskTest.java
@@ -249,7 +249,7 @@ public class ParallelIndexSupervisorTaskTest
 }
   }
 
-  public static class staticUtilsTest
+  public static class StaticUtilsTest
   {
 @Test
 public void testIsParallelModeFalse_nullTuningConfig()
diff --git 
a/server/src/main/java/org/apache/druid/server/coordinator/duty/CompactSegments.java
 
b/server/src/main/java/org/apache/druid/server/coordinator/duty/CompactSegments.java
index 50b31ca..3b7ee31 100644
--- 
a/server/src/main/java/org/apache/druid/server/coordinator/duty/CompactSegments.java
+++ 
b/server/src/main/java/org/apache/druid/server/coordinator/duty/CompactSegments.java
@@ -20,6 +20,7 @@
 package org.apache.druid.server.coordinator.duty;
 
 import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.collect.Maps;
 import com.google.inject.Inject;
 import org.apache.druid.client.indexing.ClientCompactionTaskQuery;
@@ -27,6 +28,7 @@ import 
org.apache.druid.client.indexing.ClientCompactionTaskQueryTuningConfig;
 import org.apache.druid.client.indexing.IndexingServiceClient;
 import org.apache.druid.client.indexing.TaskPayloadResponse;
 import org.apache.druid.indexer.TaskStatusPlus;
+import org.apache.druid.indexer.partitions.SingleDimensionPartitionsSpec;
 import org.apache.druid.java.util.common.ISE;
 import org.apache.druid.java.util.common.logger.Logger;
 import org.apache.druid.server.coordinator.AutoCompactionSnapshot;
@@ -123,8 +125,9 @@ public class CompactSegments implements CoordinatorDuty
 final ClientCompactionTaskQuery compactionTaskQuery = 
(ClientCompactionTaskQuery) response.getPayload();
 final Interval interval = 
compactionTaskQuery.getIoConfig().getInputSpec().getInterval();
 compactionTaskIntervals.computeIfAbsent(status.getDataSource(), k 
-> new ArrayList<>()).add(interval);
-final int numSubTasks = 
findNumMaxConcurrentSubTasks(compactionTaskQuery.getTuningConfig());
-numEstimatedNonCompleteCompactionTasks += numSubTasks + 1; // 
count the compaction task itself
+numEstimatedNonCompleteCompactionTasks += 
findMaxNumTaskSlotsUsedByOneCompactionTask(
+compactionTaskQuery.getTuningConfig()
+);
   } else {
 throw new ISE(&qu
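The hunk above replaces the per-task estimate `numSubTasks + 1` with a call to `findMaxNumTaskSlotsUsedByOneCompactionTask`. A minimal, hypothetical sketch of that accounting (the method name and null-handling here mirror the diff, but the real logic lives in `CompactSegments` and reads the full tuning config):

```java
// Hedged sketch of the slot accounting fixed in #10479: a parallel
// compaction task holds one slot per concurrent subtask plus one slot
// for the supervisor task itself; a sequential task holds a single slot.
public class TaskSlotEstimate
{
  // maxNumConcurrentSubTasks == null stands in for an absent tuning config;
  // values <= 1 mean the task runs sequentially (mirroring isParallelMode).
  public static int maxTaskSlots(Integer maxNumConcurrentSubTasks)
  {
    if (maxNumConcurrentSubTasks == null || maxNumConcurrentSubTasks <= 1) {
      return 1; // sequential: only the compaction task itself
    }
    return maxNumConcurrentSubTasks + 1; // subtask slots + the supervisor task
  }

  public static void main(String[] args)
  {
    System.out.println(maxTaskSlots(null)); // 1
    System.out.println(maxTaskSlots(4));    // 5
  }
}
```

The point of the fix is that the old code added `numSubTasks + 1` even for sequential tasks, overcounting the slots that auto compaction believed were occupied.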

[druid] branch 0.20.0 updated: Web console: fix compaction status when no compaction config, and small cleanup (#10483) (#10487)

2020-10-07 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new 18deb16  Web console: fix compaction status when no compaction config, 
and small cleanup (#10483) (#10487)
18deb16 is described below

commit 18deb1683ad135fbc6d30283572df44c21b9d016
Author: Jonathan Wei 
AuthorDate: Wed Oct 7 23:54:24 2020 -0700

Web console: fix compaction status when no compaction config, and small 
cleanup (#10483) (#10487)

* move timed button to icons

* cleanup redundant logic

* fix compaction status text

* remove extra style

Co-authored-by: Vadim Ogievetsky 
---
 .../components/refresh-button/refresh-button.tsx   |  2 +-
 .../__snapshots__/timed-button.spec.tsx.snap   | 83 --
 .../src/components/timed-button/timed-button.scss  | 21 --
 .../components/timed-button/timed-button.spec.tsx  | 14 ++--
 .../src/components/timed-button/timed-button.tsx   | 44 +++-
 .../lookup-edit-dialog/lookup-edit-dialog.tsx  | 11 +--
 web-console/src/utils/compaction.spec.ts   |  2 +-
 web-console/src/utils/compaction.ts| 17 +++--
 8 files changed, 93 insertions(+), 101 deletions(-)

diff --git a/web-console/src/components/refresh-button/refresh-button.tsx 
b/web-console/src/components/refresh-button/refresh-button.tsx
index 681bd42..04fe160 100644
--- a/web-console/src/components/refresh-button/refresh-button.tsx
+++ b/web-console/src/components/refresh-button/refresh-button.tsx
@@ -42,7 +42,7 @@ export const RefreshButton = React.memo(function 
RefreshButton(props: RefreshBut
   return (
 // Jest Snapshot v1, https://goo.gl/fbAQLP
 
-exports[`Timed button matches snapshot 1`] = `
-
-  
-  
+
+
+  
+}
+defaultIsOpen={false}
+disabled={false}
+fill={false}
+hasBackdrop={false}
+hoverCloseDelay={300}
+hoverOpenDelay={150}
+inheritDarkTheme={true}
+interactionKind="click"
+minimal={false}
+modifiers={Object {}}
+openOnTargetFocus={true}
+position="auto"
+targetTagName="span"
+transitionDuration={300}
+usePortal={true}
+wrapperTagName="span"
   >
-
-  
-
-  
-
-  caret-down
-
-
-  
-
-  
-
-  
-
+
+  
+
 `;
diff --git a/web-console/src/components/timed-button/timed-button.scss 
b/web-console/src/components/timed-button/timed-button.scss
deleted file mode 100644
index f4d7700..000
--- a/web-console/src/components/timed-button/timed-button.scss
+++ /dev/null
@@ -1,21 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-.timed-button {
-  padding: 10px 10px 5px 10px;
-}
diff --git a/web-console/src/components/timed-button/timed-button.spec.tsx 
b/web-console/src/components/timed-button/timed-button.spec.tsx
index e5025fb..61c0ca5 100644
--- a/web-console/src/components/timed-button/timed-button.spec.tsx
+++ b/web-console/src/components/timed-button/timed-button.spec.tsx
@@ -16,22 +16,22 @@
  * limitations under the License.
  */
 
-import { render } from '@testing-library/react';
+import { shallow } from 'enzyme';
 import React from 'react';
 
 import { TimedButton } from './timed-button';
 
-describe('Timed button', () => {
+describe('TimedButton', () => {
   it('matches snapshot', () => {
-const timedButton = (
+const timedButton = shallow(
null}
-label={'label'}
+label={'Select delay'}
 defaultDelay={1000}
-  />
+  />,
 );
-const { container } = render(timedButton);
-expect(container.firstChild).toMatchSnapshot();
+
+expect(timedButton).toMatchSnapshot();
   });
 });
diff --git a/web-console/src/components/timed-button/timed-button.tsx 
b/web-console/src/components/timed-button/timed-button.tsx
index 78a0765..fe7a990 100644
--- a/web-console/src/components/timed-button/tim

[druid] branch 0.20.0 updated (18deb16 -> 4cb5f39)

2020-10-07 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 18deb16  Web console: fix compaction status when no compaction config, 
and small cleanup (#10483) (#10487)
 add 4cb5f39  Close aggregators in HashVectorGrouper.close() (#10452) 
(#10489)

No new revisions were added by this update.

Summary of changes:
 processing/pom.xml |   5 +
 .../groupby/epinephelinae/HashVectorGrouper.java   |   2 +-
 .../epinephelinae/vector/VectorGroupByEngine.java  |  28 +++---
 ...perTestUtil.java => HashVectorGrouperTest.java} |  34 ---
 .../vector/VectorGroupByEngineIteratorTest.java| 103 +
 .../java/org/apache/druid/segment/TestIndex.java   |   2 +-
 6 files changed, 146 insertions(+), 28 deletions(-)
 copy 
processing/src/test/java/org/apache/druid/query/groupby/epinephelinae/{GrouperTestUtil.java
 => HashVectorGrouperTest.java} (53%)
 create mode 100644 
processing/src/test/java/org/apache/druid/query/groupby/epinephelinae/vector/VectorGroupByEngineIteratorTest.java
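The one-line change to `HashVectorGrouper.close()` (#10452) is an instance of a general resource-ownership rule: a component that creates closeable children must close them in its own `close()`, or they leak. An illustrative sketch of the pattern (not Druid's actual grouper code):

```java
import java.io.Closeable;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a "grouper" that owns closeable aggregators must
// release them in close(); freeing just its own buffer is not enough.
public class OwningGrouper implements Closeable
{
  private final List<Closeable> aggregators = new ArrayList<>();
  private boolean closed;

  public void register(Closeable aggregator)
  {
    aggregators.add(aggregator);
  }

  @Override
  public void close()
  {
    if (closed) {
      return; // close() must be idempotent
    }
    closed = true;
    for (Closeable aggregator : aggregators) {
      try {
        aggregator.close();
      } catch (Exception e) {
        // best effort: keep closing the remaining aggregators
      }
    }
    aggregators.clear();
  }
}
```

The accompanying `HashVectorGrouperTest` and `VectorGroupByEngineIteratorTest` in the diffstat exist precisely to assert this kind of close-propagation behavior.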


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch 0.20.0 updated: vectorized group by support for nullable numeric columns (#10441) (#10490)

2020-10-07 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new b3b2538  vectorized group by support for nullable numeric columns 
(#10441) (#10490)
b3b2538 is described below

commit b3b25386479f821248584fb87c7903dc8d99cc9e
Author: Jonathan Wei 
AuthorDate: Wed Oct 7 23:56:48 2020 -0700

vectorized group by support for nullable numeric columns (#10441) (#10490)

* vectorized group by support for numeric null columns

* revert unintended change

* adjust

* review stuffs

Co-authored-by: Clint Wylie 
---
 .../VectorValueMatcherColumnProcessorFactory.java  |  18 ++-
 .../epinephelinae/RowBasedGrouperHelper.java   |  67 +---
 .../GroupByVectorColumnProcessorFactory.java   |  51 ++-
 .../NullableDoubleGroupByVectorColumnSelector.java |  82 ++
 .../NullableFloatGroupByVectorColumnSelector.java  |  82 ++
 .../NullableLongGroupByVectorColumnSelector.java   |  82 ++
 .../epinephelinae/vector/VectorGroupByEngine.java  |   2 +-
 .../druid/segment/DimensionHandlerUtils.java   |   5 +
 .../segment/VectorColumnProcessorFactory.java  |  17 ++-
 ...ctorValueMatcherColumnProcessorFactoryTest.java |  67 +++-
 .../query/groupby/GroupByQueryRunnerTest.java  | 169 -
 .../virtual/VectorizedVirtualColumnTest.java   |  13 --
 .../apache/druid/sql/calcite/CalciteQueryTest.java |  44 --
 13 files changed, 590 insertions(+), 109 deletions(-)
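The new `Nullable{Double,Float,Long}GroupByVectorColumnSelector` classes in the diffstat exist because a nullable numeric grouping key must keep NULL distinct from 0. A hypothetical sketch of the key-layout idea (a null-indicator byte in front of the value bytes); the real selectors write into vectorized grouping-key buffers rather than per-row arrays:

```java
// Hypothetical key layout: [null byte][8 value bytes], so rows whose
// value is 0 and rows whose value is NULL land in different buckets.
public class NullableLongKey
{
  public static byte[] write(long value, boolean isNull)
  {
    byte[] key = new byte[9];
    key[0] = isNull ? (byte) 1 : (byte) 0;
    long v = isNull ? 0L : value; // canonicalize the payload for nulls
    for (int i = 0; i < 8; i++) {
      key[1 + i] = (byte) (v >>> (8 * (7 - i))); // big-endian value bytes
    }
    return key;
  }
}
```

This also explains the new `ColumnCapabilities` parameter threaded through `VectorColumnProcessorFactory` in the same change: the factory needs the column's capabilities to decide whether to build the nullable variant of a selector.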

diff --git 
a/processing/src/main/java/org/apache/druid/query/filter/vector/VectorValueMatcherColumnProcessorFactory.java
 
b/processing/src/main/java/org/apache/druid/query/filter/vector/VectorValueMatcherColumnProcessorFactory.java
index 5ca511f..b2083cc 100644
--- 
a/processing/src/main/java/org/apache/druid/query/filter/vector/VectorValueMatcherColumnProcessorFactory.java
+++ 
b/processing/src/main/java/org/apache/druid/query/filter/vector/VectorValueMatcherColumnProcessorFactory.java
@@ -20,6 +20,7 @@
 package org.apache.druid.query.filter.vector;
 
 import org.apache.druid.segment.VectorColumnProcessorFactory;
+import org.apache.druid.segment.column.ColumnCapabilities;
 import org.apache.druid.segment.vector.MultiValueDimensionVectorSelector;
 import org.apache.druid.segment.vector.SingleValueDimensionVectorSelector;
 import org.apache.druid.segment.vector.VectorValueSelector;
@@ -40,6 +41,7 @@ public class VectorValueMatcherColumnProcessorFactory 
implements VectorColumnPro
 
   @Override
   public VectorValueMatcherFactory makeSingleValueDimensionProcessor(
+  final ColumnCapabilities capabilities,
   final SingleValueDimensionVectorSelector selector
   )
   {
@@ -48,6 +50,7 @@ public class VectorValueMatcherColumnProcessorFactory 
implements VectorColumnPro
 
   @Override
   public VectorValueMatcherFactory makeMultiValueDimensionProcessor(
+  final ColumnCapabilities capabilities,
   final MultiValueDimensionVectorSelector selector
   )
   {
@@ -55,19 +58,28 @@ public class VectorValueMatcherColumnProcessorFactory 
implements VectorColumnPro
   }
 
   @Override
-  public VectorValueMatcherFactory makeFloatProcessor(final 
VectorValueSelector selector)
+  public VectorValueMatcherFactory makeFloatProcessor(
+  final ColumnCapabilities capabilities,
+  final VectorValueSelector selector
+  )
   {
 return new FloatVectorValueMatcher(selector);
   }
 
   @Override
-  public VectorValueMatcherFactory makeDoubleProcessor(final 
VectorValueSelector selector)
+  public VectorValueMatcherFactory makeDoubleProcessor(
+  final ColumnCapabilities capabilities,
+  final VectorValueSelector selector
+  )
   {
 return new DoubleVectorValueMatcher(selector);
   }
 
   @Override
-  public VectorValueMatcherFactory makeLongProcessor(final VectorValueSelector 
selector)
+  public VectorValueMatcherFactory makeLongProcessor(
+  final ColumnCapabilities capabilities,
+  final VectorValueSelector selector
+  )
   {
 return new LongVectorValueMatcher(selector);
   }
diff --git 
a/processing/src/main/java/org/apache/druid/query/groupby/epinephelinae/RowBasedGrouperHelper.java
 
b/processing/src/main/java/org/apache/druid/query/groupby/epinephelinae/RowBasedGrouperHelper.java
index d21e609..c099eed 100644
--- 
a/processing/src/main/java/org/apache/druid/query/groupby/epinephelinae/RowBasedGrouperHelper.java
+++ 
b/processing/src/main/java/org/apache/druid/query/groupby/epinephelinae/RowBasedGrouperHelper.java
@@ -736,7 +736,6 @@ public class RowBasedGrouperHelper
   {
 private final boolean includeTimestamp;
 private final boolean sortByDimsFirst;
-private final int dimCount;
 private final long maxDictionarySize;
 private final DefaultLimitSpec limitSpec;
 private final List dimensions;
@@ -756,7 +755,6 @@ public class

[druid] branch 0.20.0 updated (b3b2538 -> b3afbb0)

2020-10-08 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git.


from b3b2538  vectorized group by support for nullable numeric columns 
(#10441) (#10490)
 add b3afbb0  Fix Avro support in Web Console (#10232) (#10491)

No new revisions were added by this update.

Summary of changes:
 docs/development/extensions-core/avro.md   | 21 -
 docs/ingestion/data-formats.md |  6 +++
 .../druid/data/input/avro/AvroFlattenerMaker.java  | 24 --
 .../data/input/AvroStreamInputRowParserTest.java   | 18 +++
 .../data/input/avro/AvroFlattenerMakerTest.java| 12 +++--
 .../druid/data/input/avro/AvroOCFReaderTest.java   | 55 --
 web-console/src/utils/ingestion-spec.spec.ts   | 33 -
 web-console/src/utils/ingestion-spec.tsx   |  4 +-
 .../src/views/load-data-view/load-data-view.tsx|  4 +-
 website/.spelling  |  1 +
 10 files changed, 153 insertions(+), 25 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] branch 0.20.0 updated: Suppress CVE-2018-11765 for hadoop dependencies (#10485) (#10492)

2020-10-08 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/0.20.0 by this push:
 new 9a2a9ac  Suppress CVE-2018-11765 for hadoop dependencies (#10485) 
(#10492)
9a2a9ac is described below

commit 9a2a9acb7d34b81feb98b5b4499a4a36f640bdc1
Author: Jonathan Wei 
AuthorDate: Thu Oct 8 01:23:25 2020 -0700

Suppress CVE-2018-11765 for hadoop dependencies (#10485) (#10492)
---
 owasp-dependency-check-suppressions.xml | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/owasp-dependency-check-suppressions.xml 
b/owasp-dependency-check-suppressions.xml
index 998e5c6..6a532ef 100644
--- a/owasp-dependency-check-suppressions.xml
+++ b/owasp-dependency-check-suppressions.xml
@@ -281,4 +281,11 @@
      <cve>CVE-2018-8009</cve>
      <cve>CVE-2018-8029</cve>
   </suppress>
+  <suppress>
+     <packageUrl regex="true">^pkg:maven/org\.apache\.hadoop/hadoop\-.*@.*$</packageUrl>
+     <cve>CVE-2018-11765</cve>
+  </suppress>
 </suppressions>


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid] annotated tag druid-0.20.0-rc2 updated (acdc6ee -> 49703d5)

2020-10-08 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to annotated tag druid-0.20.0-rc2
in repository https://gitbox.apache.org/repos/asf/druid.git.


*** WARNING: tag druid-0.20.0-rc2 was modified! ***

from acdc6ee  (commit)
  to 49703d5  (tag)
 tagging acdc6ee7ea3a81fb3e70b92d7cc682921f988eb5 (commit)
 replaces druid-0.8.0-rc1
  by jon-wei
  on Thu Oct 8 21:35:13 2020 -0700

- Log -
[maven-release-plugin] copy for tag druid-0.20.0-rc2
---


No new revisions were added by this update.

Summary of changes:


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



svn commit: r41858 - in /dev/druid: 0.20.0-rc1/ 0.20.0-rc2/

2020-10-08 Thread jonwei
Author: jonwei
Date: Fri Oct  9 06:07:36 2020
New Revision: 41858

Log:
Add 0.20.0-rc2 artifacts

Added:
dev/druid/0.20.0-rc2/
dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz   (with props)
dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.asc
dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.sha512
dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz   (with props)
dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.asc
dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.sha512
Removed:
dev/druid/0.20.0-rc1/

Added: dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz
==
Binary file - no diff available.

Propchange: dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.asc
==
--- dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.asc (added)
+++ dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.asc Fri Oct  9 06:07:36 2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCgAdFiEEBJXgMcNdEyR5wkHskoPEkhrESD4FAl9/8gAACgkQkoPEkhrE
+SD6LRBAAkZgqvR3U3dUyIRdclf0fQaWh3bpyIAoLEQYy1jytdx0Btfe8dGaVQYQH
+bWdA1r38jn3Uz5eUI5Tn92XWCHr6qRtScB2R2Bnoi+odYWm74JIdeJ0CgnuMlWwv
+/VaFUAV/nyP/8HeH89B/tRKWQgPJKScbRLhi4v4pzvxWeq8OAhGEAU7WKY9RBSY8
+dKldDQct8tH9WsY1K+aNGnNlRwnNwgJ8jOcSTwg/7BZAYG4o3QfTppdQETtdnrkc
+2vJyLLqexgZfAAHhafzfDrkKly1BcfGT/OtFq6kiHnUOJRWx6ZbyEpW062qgE2NP
+aBDYq/J1QlVbqKqCHHWgfmwjx0sQes61y7clF9XmUvtsyeiVa+lsmHmN0ruuOsTJ
+Gda6t02MgR9Zzl6sAapsALKXNsiLmh2Pan3ly+Zg97h8rZUWehU8O2TuCr0CpcfJ
+cUTMbkMDUGYx3NUAqFBWLa4YDqxVu2c94vgY3hnzbiubG7clvljUBPeW72MsEO35
+nKygupcGNFsy8C4WkPHQlai/GymrkQsvvJ/SbNulONSnVOoTf5G3gvdYTYE4Mse4
+oTcFZf4HWLkUzJ5eOzfe/hAbgJBqBJ1Xdo1ozT83JxCU1e45jCYbgce1LhRYxzdi
+nIo0PxKjOWM1XW4UYmRkqAWgtjjKUEucph+jBmAzcaZVZkU0+lg=
+=mnCX
+-END PGP SIGNATURE-

Added: dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.sha512
==
--- dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.sha512 (added)
+++ dev/druid/0.20.0-rc2/apache-druid-0.20.0-bin.tar.gz.sha512 Fri Oct  9 06:07:36 2020
@@ -0,0 +1 @@
+67fbb50071d09bc3328fe2172ee6f27ce1d960a2ce11fa863a38173897126d5e0009495da3f145d1ad110e65db6d6a7e8f37ea2204a2f76a89e10886f6d2aa41
\ No newline at end of file

Added: dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz
==
Binary file - no diff available.

Propchange: dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.asc
==
--- dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.asc (added)
+++ dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.asc Fri Oct  9 06:07:36 2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCgAdFiEEBJXgMcNdEyR5wkHskoPEkhrESD4FAl9/8gEACgkQkoPEkhrE
+SD52pQ/5ATWgPl0VNn1PlASQ6SovmdkRaslukxlcnmpJiGRa7LnLN8O5bvxSQNZj
+XDoHNXGNewDFlaniqLCEMtMwOLjeuY0Lrs0g6v5kiNvw+ygvV1vhTyjE1SiJT5yN
+q2/E4kN3MXl+AZePRNlupE9zq7s0GcmV/8qa/QcELHACfEc5owUHklVQPdD6Ea16
+NsPY+YeELaq9gRGnUka4Sy51yuvfC7jM2hJHhEn6Tze0QkVxHauvXxXr+qKX/XzP
+w3BfGyJi7cXsiTtdPIyazknQQaOd1s15hVFYQre86Uc5oS22AtrMtLHaaxGAAuPc
+YqQgvlx3Rk0RRmNcxUu24IdhjYwXpn+rME6V0XVwPDz7/pTVjzMar90muZKLGyte
+3a3iNu9oB6PtURTvhkpffutouwJxi/JDvMSj4dOIvhFr9v1pzyQMfx9LqsjzhHPP
+1d1qpM3lKBThexheJ/dhllhxS/OzeQ3NTJQFItPxogPbJxQk8j1yCP7v0NlzDcB0
+T9CyIgyoTCSllGBtuRAPZoldiniD4IxX8LN7Ke40Otv8rdM0T79Z0UvQQ/ZJXVpL
+czTDw0ChnSOUeTG2o1tRu8YvlqpiDkaYZeFr6g6uGo5FIccCcqwKFPSu5AN8ZEIc
+ONt6UBMZL23bugGVG1yIKWsMyfHiCME0cfTOYi2y2s6ojt5PMO4=
+=YXA9
+-END PGP SIGNATURE-

Added: dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.sha512
==
--- dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.sha512 (added)
+++ dev/druid/0.20.0-rc2/apache-druid-0.20.0-src.tar.gz.sha512 Fri Oct  9 06:07:36 2020
@@ -0,0 +1 @@
+15a424cb772ed01c081d09cec0b2798186dc32ea2ce2522a78a5ebd032ec9755f186a67926b7a86a1e6c91885a9ead7b77c8b5c78b06f76bd84ac355c036a43d
\ No newline at end of file
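The `.sha512` files committed above hold a bare hex digest with no filename suffix and no trailing newline, so `sha512sum -c` cannot consume them directly. A minimal shell sketch of how a downloader might verify an artifact against such a digest file (`verify_sha512` is a hypothetical helper; file names follow the commit above):

```shell
# Sketch: verify a release artifact against its detached .sha512 digest file.
# The digest file contains only the bare hex digest, so we compute the
# artifact's digest and compare strings rather than using `sha512sum -c`.
verify_sha512() {
  artifact="$1"
  expected=$(cat "${artifact}.sha512")                 # bare digest, no newline
  actual=$(sha512sum "$artifact" | awk '{print $1}')   # computed digest
  if [ "$expected" = "$actual" ]; then
    echo "OK: $artifact"
  else
    echo "MISMATCH: $artifact" >&2
    return 1
  fi
}
# The .asc signature is checked separately, e.g.
#   gpg --verify apache-druid-0.20.0-bin.tar.gz.asc apache-druid-0.20.0-bin.tar.gz
# after importing the release manager's key from the project's KEYS file.
```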






[druid-website-src] branch 20rc2_update created (now bd206f9)

2020-10-08 Thread jonwei

jonwei pushed a change to branch 20rc2_update
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git.


  at bd206f9  0.20.0-rc2 updates

This branch includes the following new commits:

 new bd206f9  0.20.0-rc2 updates

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[druid-website] 01/01: 0.20.0-rc2 updates

2020-10-08 Thread jonwei

jonwei pushed a commit to branch 20rc2_update
in repository https://gitbox.apache.org/repos/asf/druid-website.git

commit bb160106366d98022f4340f4336e27e2ed24da24
Author: jon-wei 
AuthorDate: Thu Oct 8 23:36:41 2020 -0700

0.20.0-rc2 updates
---
 community/index.html  |   1 +
 docs/0.20.0/development/extensions-core/avro.html |  13 -
 docs/0.20.0/ingestion/data-formats.html   |   9 +
 docs/latest/development/extensions-core/avro.html |  13 -
 docs/latest/ingestion/data-formats.html   |   9 +
 img/favicon.png   | Bin 4514 -> 1156 bytes
 index.html|  16 
 libraries.html|   1 +
 technology.html   |   2 +-
 use-cases.html|   2 +-
 10 files changed, 54 insertions(+), 12 deletions(-)

diff --git a/community/index.html b/community/index.html
index 685a0d1..ae2159a 100644
--- a/community/index.html
+++ b/community/index.html
@@ -159,6 +159,7 @@ new features, on https://github.com/apache/druid";>GitHub.
 
 https://www.cloudera.com/";>Cloudera
 https://datumo.io/";>Datumo
+https://www.deep.bi/solutions/apache-druid";>Deep.BI
 https://imply.io/";>Imply
 
 
diff --git a/docs/0.20.0/development/extensions-core/avro.html b/docs/0.20.0/development/extensions-core/avro.html
index b12a1e7..38a808e 100644
--- a/docs/0.20.0/development/extensions-core/avro.html
+++ b/docs/0.20.0/development/extensions-core/avro.html
@@ -82,8 +82,19 @@
 two Avro Parsers for stream ingestion and Hadoop batch ingestion.
See Avro Hadoop Parser and Avro Stream Parser
for more details about how to use these in an ingestion spec.
+Additionally, it provides an InputFormat for reading Avro OCF files when using
+native batch indexing, see Avro OCF
+for details on how to ingest OCF files.
Make sure to include druid-avro-extensions as an extension.
-← Approximate Histogram aggregatorsMicrosoft Azure →flattenSpec on the parser.
+Druid doesn't currently support Avro logical types, they will be ignored and fields will be handled according to the underlying primitive type.
+← Approximate Histogram aggregatorsMicrosoft Azure →druid-avro-extensions as an extension to use the Avro OCF input format.

+
+See the Avro Types section for how Avro types are handled in Druid
+
The inputFormat to load data of Avro OCF format. An example is:
 "ioConfig": {
   "inputFormat": {
@@ -383,6 +386,9 @@ Each line can be further parsed using parseSpec
 You need to include the druid-avro-extensions
 as an extension to use the Avro Hadoop Parser.
 
+
+See the Avro Types section for how Avro types are handled in Druid
+
This parser is for Hadoop batch ingestion.
The inputFormat of inputSpec in ioConfig must be set to "org.apache.druid.data.input.avro.AvroValueInputFormat".
You may want to set Avro reader's schema in jobProperties in tuningConfig,
@@ -880,6 +886,9 @@ an explicitly defined http://www.joda.org/joda-time/apidocs/org/joda/ti

You need to include the druid-avro-extensions
as an extension to use the Avro Stream Parser.

+
+See the Avro Types section for how Avro types are handled in Druid
+
This parser is for stream ingestion and reads Avro data from a stream directly.
 
 
diff --git a/docs/latest/development/extensions-core/avro.html b/docs/latest/development/extensions-core/avro.html
index 1b8a32b..32e689b 100644
--- a/docs/latest/development/extensions-core/avro.html
+++ b/docs/latest/development/extensions-core/avro.html
@@ -82,8 +82,19 @@
 two Avro Parsers for stream ingestion and Hadoop batch ingestion.
See Avro Hadoop Parser and Avro Stream Parser
for more details about how to use these in an ingestion spec.
+Additionally, it provides an InputFormat for reading Avro OCF files when using
+native batch indexing, see Avro OCF
+for details on how to ingest OCF files.
Make sure to include druid-avro-extensions as an extension.
-← Approximate Histogram aggregatorsMicrosoft Azure →flattenSpec on the parser.
+Druid doesn't currently support Avro logical types, they will be ignored and fields will be handled according to the underlying primitive type.
+← Approximate Histogram aggregatorsMicrosoft Azure →druid-avro-extensions as an extension to use the Avro OCF input format.

+
+See the Avro Types section for how Avro types are handled in Druid
+
The inputFormat to load data of Avro OCF format. An example is:
 "ioConfig": {
   "inputFormat": {
@@ -383,6 +386,9 @@ Each line can be further parsed using parseSpec
 You need to include the druid-avro-extensions
 as an extension to use the Avro Hadoop Parser.
 
+
+See the Avro Types section for how Avro types are handled in

[druid-website-src] 01/01: 0.20.0-rc2 updates

2020-10-08 Thread jonwei

jonwei pushed a commit to branch 20rc2_update
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git

commit bd206f9f00961d660fc43f6b38d98b6c9dbd261e
Author: jon-wei 
AuthorDate: Thu Oct 8 23:36:01 2020 -0700

0.20.0-rc2 updates
---
 docs/0.20.0/development/extensions-core/avro.html | 13 -
 docs/0.20.0/ingestion/data-formats.html   |  9 +
 docs/latest/development/extensions-core/avro.html | 13 -
 docs/latest/ingestion/data-formats.html   |  9 +
 4 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/docs/0.20.0/development/extensions-core/avro.html b/docs/0.20.0/development/extensions-core/avro.html
index b12a1e7..38a808e 100644
--- a/docs/0.20.0/development/extensions-core/avro.html
+++ b/docs/0.20.0/development/extensions-core/avro.html
@@ -82,8 +82,19 @@
 two Avro Parsers for stream ingestion and Hadoop batch ingestion.
See Avro Hadoop Parser and Avro Stream Parser
for more details about how to use these in an ingestion spec.
+Additionally, it provides an InputFormat for reading Avro OCF files when using
+native batch indexing, see Avro OCF
+for details on how to ingest OCF files.
Make sure to include druid-avro-extensions as an extension.
-← Approximate Histogram aggregatorsMicrosoft Azure →flattenSpec on the parser.
+Druid doesn't currently support Avro logical types, they will be ignored and fields will be handled according to the underlying primitive type.
+← Approximate Histogram aggregatorsMicrosoft Azure →druid-avro-extensions as an extension to use the Avro OCF input format.

+
+See the Avro Types section for how Avro types are handled in Druid
+
The inputFormat to load data of Avro OCF format. An example is:
 "ioConfig": {
   "inputFormat": {
@@ -383,6 +386,9 @@ Each line can be further parsed using parseSpec
 You need to include the druid-avro-extensions
 as an extension to use the Avro Hadoop Parser.
 
+
+See the Avro Types section for how Avro types are handled in Druid
+
This parser is for Hadoop batch ingestion.
The inputFormat of inputSpec in ioConfig must be set to "org.apache.druid.data.input.avro.AvroValueInputFormat".
You may want to set Avro reader's schema in jobProperties in tuningConfig,
@@ -880,6 +886,9 @@ an explicitly defined http://www.joda.org/joda-time/apidocs/org/joda/ti

You need to include the druid-avro-extensions
as an extension to use the Avro Stream Parser.

+
+See the Avro Types section for how Avro types are handled in Druid
+
This parser is for stream ingestion and reads Avro data from a stream directly.
 
 
diff --git a/docs/latest/development/extensions-core/avro.html b/docs/latest/development/extensions-core/avro.html
index 1b8a32b..32e689b 100644
--- a/docs/latest/development/extensions-core/avro.html
+++ b/docs/latest/development/extensions-core/avro.html
@@ -82,8 +82,19 @@
 two Avro Parsers for stream ingestion and Hadoop batch ingestion.
See Avro Hadoop Parser and Avro Stream Parser
for more details about how to use these in an ingestion spec.
+Additionally, it provides an InputFormat for reading Avro OCF files when using
+native batch indexing, see Avro OCF
+for details on how to ingest OCF files.
Make sure to include druid-avro-extensions as an extension.
-← Approximate Histogram aggregatorsMicrosoft Azure →flattenSpec on the parser.
+Druid doesn't currently support Avro logical types, they will be ignored and fields will be handled according to the underlying primitive type.
+← Approximate Histogram aggregatorsMicrosoft Azure →druid-avro-extensions as an extension to use the Avro OCF input format.

+
+See the Avro Types section for how Avro types are handled in Druid
+
The inputFormat to load data of Avro OCF format. An example is:
 "ioConfig": {
   "inputFormat": {
@@ -383,6 +386,9 @@ Each line can be further parsed using parseSpec
 You need to include the druid-avro-extensions
 as an extension to use the Avro Hadoop Parser.
 
+
+See the Avro Types section for how Avro types are handled in Druid
+
This parser is for Hadoop batch ingestion.
The inputFormat of inputSpec in ioConfig must be set to "org.apache.druid.data.input.avro.AvroValueInputFormat".
You may want to set Avro reader's schema in jobProperties in tuningConfig,
@@ -880,6 +886,9 @@ an explicitly defined http://www.joda.org/joda-time/apidocs/org/joda/ti

You need to include the druid-avro-extensions
as an extension to use the Avro Stream Parser.

+
+See the Avro Types section for how Avro types are handled in Druid
+
This parser is for stream ingestion and reads Avro data from a stream directly.
 
 





[druid-website] branch 20rc2_update created (now bb16010)

2020-10-08 Thread jonwei

jonwei pushed a change to branch 20rc2_update
in repository https://gitbox.apache.org/repos/asf/druid-website.git.


  at bb16010  0.20.0-rc2 updates

This branch includes the following new commits:

 new bb16010  0.20.0-rc2 updates







[druid-website-src] branch master updated (7ccaba8 -> 8caaf92)

2020-10-08 Thread jonwei

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git.


from 7ccaba8  Merge pull request #176 from implydata/better-favicon
 add bd206f9  0.20.0-rc2 updates
 new 8caaf92  Merge pull request #177 from apache/20rc2_update



Summary of changes:
 docs/0.20.0/development/extensions-core/avro.html | 13 -
 docs/0.20.0/ingestion/data-formats.html   |  9 +
 docs/latest/development/extensions-core/avro.html | 13 -
 docs/latest/ingestion/data-formats.html   |  9 +
 4 files changed, 42 insertions(+), 2 deletions(-)





[druid-website] 01/01: Merge pull request #104 from apache/20rc2_update

2020-10-08 Thread jonwei

jonwei pushed a commit to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/druid-website.git

commit ca36d49471d7fdd4e2c64cec9aa35281defebed0
Merge: 6b6df73 bb16010
Author: Jonathan Wei 
AuthorDate: Thu Oct 8 23:38:21 2020 -0700

Merge pull request #104 from apache/20rc2_update

0.20.0-rc2 updates

 community/index.html  |   1 +
 docs/0.20.0/development/extensions-core/avro.html |  13 -
 docs/0.20.0/ingestion/data-formats.html   |   9 +
 docs/latest/development/extensions-core/avro.html |  13 -
 docs/latest/ingestion/data-formats.html   |   9 +
 img/favicon.png   | Bin 4514 -> 1156 bytes
 index.html|  16 
 libraries.html|   1 +
 technology.html   |   2 +-
 use-cases.html|   2 +-
 10 files changed, 54 insertions(+), 12 deletions(-)





[druid-website-src] 01/01: Merge pull request #177 from apache/20rc2_update

2020-10-08 Thread jonwei

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git

commit 8caaf924080ac8284addd09532356d7f30f092b8
Merge: 7ccaba8 bd206f9
Author: Jonathan Wei 
AuthorDate: Thu Oct 8 23:38:17 2020 -0700

Merge pull request #177 from apache/20rc2_update

0.20.0-rc2 updates

 docs/0.20.0/development/extensions-core/avro.html | 13 -
 docs/0.20.0/ingestion/data-formats.html   |  9 +
 docs/latest/development/extensions-core/avro.html | 13 -
 docs/latest/ingestion/data-formats.html   |  9 +
 4 files changed, 42 insertions(+), 2 deletions(-)





[druid-website] branch asf-staging updated (6b6df73 -> ca36d49)

2020-10-08 Thread jonwei

jonwei pushed a change to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/druid-website.git.


from 6b6df73  Merge pull request #101 from apache/20docs
 add bb16010  0.20.0-rc2 updates
 new ca36d49  Merge pull request #104 from apache/20rc2_update



Summary of changes:
 community/index.html  |   1 +
 docs/0.20.0/development/extensions-core/avro.html |  13 -
 docs/0.20.0/ingestion/data-formats.html   |   9 +
 docs/latest/development/extensions-core/avro.html |  13 -
 docs/latest/ingestion/data-formats.html   |   9 +
 img/favicon.png   | Bin 4514 -> 1156 bytes
 index.html|  16 
 libraries.html|   1 +
 technology.html   |   2 +-
 use-cases.html|   2 +-
 10 files changed, 54 insertions(+), 12 deletions(-)





[druid] branch master updated: Any virtual column on "__time" should be a pre-join virtual column (#10451)

2020-10-12 Thread jonwei

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
 new 567e381  Any virtual column on "__time" should be a pre-join virtual column (#10451)
567e381 is described below

commit 567e38170500d3649cbfaa28cf7aa6f5275d02e7
Author: Abhishek Agarwal <1477457+abhishekagarwa...@users.noreply.github.com>
AuthorDate: Tue Oct 13 01:34:55 2020 +0530

Any virtual column on "__time" should be a pre-join virtual column (#10451)

* Virtual column on __time should be in pre-join

* Add unit test
---
 .../join/HashJoinSegmentStorageAdapter.java|  2 +
 .../BaseHashJoinSegmentStorageAdapterTest.java | 17 +++
 .../join/HashJoinSegmentStorageAdapterTest.java| 53 --
 3 files changed, 59 insertions(+), 13 deletions(-)

diff --git a/processing/src/main/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapter.java b/processing/src/main/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapter.java
index 03f3f94..d6517c1 100644
--- a/processing/src/main/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapter.java
+++ b/processing/src/main/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapter.java
@@ -34,6 +34,7 @@ import org.apache.druid.segment.StorageAdapter;
 import org.apache.druid.segment.VirtualColumn;
 import org.apache.druid.segment.VirtualColumns;
 import org.apache.druid.segment.column.ColumnCapabilities;
+import org.apache.druid.segment.column.ColumnHolder;
 import org.apache.druid.segment.data.Indexed;
 import org.apache.druid.segment.data.ListIndexed;
 import org.apache.druid.segment.join.filter.JoinFilterAnalyzer;
@@ -305,6 +306,7 @@ public class HashJoinSegmentStorageAdapter implements StorageAdapter
   )
   {
 final Set baseColumns = new HashSet<>();
+baseColumns.add(ColumnHolder.TIME_COLUMN_NAME);
 Iterables.addAll(baseColumns, baseAdapter.getAvailableDimensions());
 Iterables.addAll(baseColumns, baseAdapter.getAvailableMetrics());
 
diff --git a/processing/src/test/java/org/apache/druid/segment/join/BaseHashJoinSegmentStorageAdapterTest.java b/processing/src/test/java/org/apache/druid/segment/join/BaseHashJoinSegmentStorageAdapterTest.java
index 6a6af72..d5dc9a2 100644
--- a/processing/src/test/java/org/apache/druid/segment/join/BaseHashJoinSegmentStorageAdapterTest.java
+++ b/processing/src/test/java/org/apache/druid/segment/join/BaseHashJoinSegmentStorageAdapterTest.java
@@ -27,7 +27,9 @@ import org.apache.druid.query.QueryContexts;
 import org.apache.druid.query.filter.Filter;
 import org.apache.druid.query.lookup.LookupExtractor;
 import org.apache.druid.segment.QueryableIndexSegment;
+import org.apache.druid.segment.VirtualColumn;
 import org.apache.druid.segment.VirtualColumns;
+import org.apache.druid.segment.column.ValueType;
 import org.apache.druid.segment.join.filter.JoinFilterAnalyzer;
 import org.apache.druid.segment.join.filter.JoinFilterPreAnalysis;
 import org.apache.druid.segment.join.filter.JoinFilterPreAnalysisKey;
@@ -242,4 +244,19 @@ public class BaseHashJoinSegmentStorageAdapterTest
 )
 );
   }
+
+  protected VirtualColumn makeExpressionVirtualColumn(String expression)
+  {
+return makeExpressionVirtualColumn(expression, "virtual");
+  }
+
+  protected VirtualColumn makeExpressionVirtualColumn(String expression, String columnName)
+  {
+return new ExpressionVirtualColumn(
+columnName,
+expression,
+ValueType.STRING,
+ExprMacroTable.nil()
+);
+  }
 }
diff --git a/processing/src/test/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapterTest.java b/processing/src/test/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapterTest.java
index 6406d7a..5546962 100644
--- a/processing/src/test/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapterTest.java
+++ b/processing/src/test/java/org/apache/druid/segment/join/HashJoinSegmentStorageAdapterTest.java
@@ -31,6 +31,7 @@ import org.apache.druid.query.filter.ExpressionDimFilter;
 import org.apache.druid.query.filter.Filter;
 import org.apache.druid.query.filter.OrDimFilter;
 import org.apache.druid.query.filter.SelectorDimFilter;
+import org.apache.druid.segment.VirtualColumn;
 import org.apache.druid.segment.VirtualColumns;
 import org.apache.druid.segment.column.ColumnCapabilities;
 import org.apache.druid.segment.column.ValueType;
@@ -38,10 +39,10 @@ import org.apache.druid.segment.filter.SelectorFilter;
 import org.apache.druid.segment.join.filter.JoinFilterPreAnalysis;
 import org.apache.druid.segment.join.lookup.LookupJoinable;
 import org.apache.druid.segment.join.table.IndexedTableJoinable;
-import org.apache.druid.segment.virtual.ExpressionVirtualColumn;
 import org.junit.Assert;
 import org.junit.Te

[druid] branch 0.20.0 updated (9a2a9ac -> ae5521a)

2020-10-15 Thread jonwei

jonwei pushed a change to branch 0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 9a2a9ac  Suppress CVE-2018-11765 for hadoop dependencies (#10485) (#10492)
 add ae5521a  [Backport] Add docs for Auto-compaction snapshot status API (#10510) (#10514)

No new revisions were added by this update.

Summary of changes:
 docs/operations/api-reference.md | 26 ++
 docs/operations/metrics.md   | 14 +-
 2 files changed, 39 insertions(+), 1 deletion(-)





svn commit: r41943 - /release/druid/0.20.0/

2020-10-15 Thread jonwei
Author: jonwei
Date: Thu Oct 15 22:31:38 2020
New Revision: 41943

Log:
Add 0.20.0 artifacts

Added:
release/druid/0.20.0/
release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz   (with props)
release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.asc
release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.sha512
release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz   (with props)
release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.asc
release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.sha512

Added: release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz
==
Binary file - no diff available.

Propchange: release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz
--
svn:mime-type = application/octet-stream

Added: release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.asc
==
--- release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.asc (added)
+++ release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.asc Thu Oct 15 22:31:38 2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCgAdFiEEBJXgMcNdEyR5wkHskoPEkhrESD4FAl9/8gAACgkQkoPEkhrE
+SD6LRBAAkZgqvR3U3dUyIRdclf0fQaWh3bpyIAoLEQYy1jytdx0Btfe8dGaVQYQH
+bWdA1r38jn3Uz5eUI5Tn92XWCHr6qRtScB2R2Bnoi+odYWm74JIdeJ0CgnuMlWwv
+/VaFUAV/nyP/8HeH89B/tRKWQgPJKScbRLhi4v4pzvxWeq8OAhGEAU7WKY9RBSY8
+dKldDQct8tH9WsY1K+aNGnNlRwnNwgJ8jOcSTwg/7BZAYG4o3QfTppdQETtdnrkc
+2vJyLLqexgZfAAHhafzfDrkKly1BcfGT/OtFq6kiHnUOJRWx6ZbyEpW062qgE2NP
+aBDYq/J1QlVbqKqCHHWgfmwjx0sQes61y7clF9XmUvtsyeiVa+lsmHmN0ruuOsTJ
+Gda6t02MgR9Zzl6sAapsALKXNsiLmh2Pan3ly+Zg97h8rZUWehU8O2TuCr0CpcfJ
+cUTMbkMDUGYx3NUAqFBWLa4YDqxVu2c94vgY3hnzbiubG7clvljUBPeW72MsEO35
+nKygupcGNFsy8C4WkPHQlai/GymrkQsvvJ/SbNulONSnVOoTf5G3gvdYTYE4Mse4
+oTcFZf4HWLkUzJ5eOzfe/hAbgJBqBJ1Xdo1ozT83JxCU1e45jCYbgce1LhRYxzdi
+nIo0PxKjOWM1XW4UYmRkqAWgtjjKUEucph+jBmAzcaZVZkU0+lg=
+=mnCX
+-END PGP SIGNATURE-

Added: release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.sha512
==
--- release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.sha512 (added)
+++ release/druid/0.20.0/apache-druid-0.20.0-bin.tar.gz.sha512 Thu Oct 15 22:31:38 2020
@@ -0,0 +1 @@
+67fbb50071d09bc3328fe2172ee6f27ce1d960a2ce11fa863a38173897126d5e0009495da3f145d1ad110e65db6d6a7e8f37ea2204a2f76a89e10886f6d2aa41
\ No newline at end of file

Added: release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz
==
Binary file - no diff available.

Propchange: release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz
--
svn:mime-type = application/octet-stream

Added: release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.asc
==
--- release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.asc (added)
+++ release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.asc Thu Oct 15 22:31:38 2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCgAdFiEEBJXgMcNdEyR5wkHskoPEkhrESD4FAl9/8gEACgkQkoPEkhrE
+SD52pQ/5ATWgPl0VNn1PlASQ6SovmdkRaslukxlcnmpJiGRa7LnLN8O5bvxSQNZj
+XDoHNXGNewDFlaniqLCEMtMwOLjeuY0Lrs0g6v5kiNvw+ygvV1vhTyjE1SiJT5yN
+q2/E4kN3MXl+AZePRNlupE9zq7s0GcmV/8qa/QcELHACfEc5owUHklVQPdD6Ea16
+NsPY+YeELaq9gRGnUka4Sy51yuvfC7jM2hJHhEn6Tze0QkVxHauvXxXr+qKX/XzP
+w3BfGyJi7cXsiTtdPIyazknQQaOd1s15hVFYQre86Uc5oS22AtrMtLHaaxGAAuPc
+YqQgvlx3Rk0RRmNcxUu24IdhjYwXpn+rME6V0XVwPDz7/pTVjzMar90muZKLGyte
+3a3iNu9oB6PtURTvhkpffutouwJxi/JDvMSj4dOIvhFr9v1pzyQMfx9LqsjzhHPP
+1d1qpM3lKBThexheJ/dhllhxS/OzeQ3NTJQFItPxogPbJxQk8j1yCP7v0NlzDcB0
+T9CyIgyoTCSllGBtuRAPZoldiniD4IxX8LN7Ke40Otv8rdM0T79Z0UvQQ/ZJXVpL
+czTDw0ChnSOUeTG2o1tRu8YvlqpiDkaYZeFr6g6uGo5FIccCcqwKFPSu5AN8ZEIc
+ONt6UBMZL23bugGVG1yIKWsMyfHiCME0cfTOYi2y2s6ojt5PMO4=
+=YXA9
+-END PGP SIGNATURE-

Added: release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.sha512
==
--- release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.sha512 (added)
+++ release/druid/0.20.0/apache-druid-0.20.0-src.tar.gz.sha512 Thu Oct 15 22:31:38 2020
@@ -0,0 +1 @@
+15a424cb772ed01c081d09cec0b2798186dc32ea2ce2522a78a5ebd032ec9755f186a67926b7a86a1e6c91885a9ead7b77c8b5c78b06f76bd84ac355c036a43d
\ No newline at end of file






[druid-website-src] branch 20_release created (now 3e20479)

2020-10-16 Thread jonwei

jonwei pushed a change to branch 20_release
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git.


  at 3e20479  0.20.0 release update

This branch includes the following new commits:

 new 3e20479  0.20.0 release update







[druid-website-src] 01/01: 0.20.0 release update

2020-10-16 Thread jonwei

jonwei pushed a commit to branch 20_release
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git

commit 3e2047984d032dd066479a7cc2ad3c9d4ab2bf0d
Author: jon-wei 
AuthorDate: Fri Oct 16 17:38:56 2020 -0700

0.20.0 release update
---
 _config.yml   |  8 ++---
 docs/0.20.0/operations/api-reference.html | 50 +++
 docs/0.20.0/operations/metrics.html   | 12 
 docs/latest/operations/api-reference.html | 50 +++
 docs/latest/operations/metrics.html   | 12 
 5 files changed, 104 insertions(+), 28 deletions(-)

diff --git a/_config.yml b/_config.yml
index b0ab908..2c3165e 100644
--- a/_config.yml
+++ b/_config.yml
@@ -26,14 +26,14 @@ description: 'Real²time Exploratory Analytics on Large Datasets'
 
 
 druid_versions:
+  - release: 0.20
+versions:
+  - version: 0.20.0
+date: 2020-10-16
   - release: 0.19
 versions:
   - version: 0.19.0
 date: 2020-07-21
-  - release: 0.18
-versions:
-  - version: 0.18.1
-date: 2020-05-13
 
 tranquility_stable_version: 0.8.3
 
diff --git a/docs/0.20.0/operations/api-reference.html b/docs/0.20.0/operations/api-reference.html
index 89bacbf..7f8a71d 100644
--- a/docs/0.20.0/operations/api-reference.html
+++ b/docs/0.20.0/operations/api-reference.html
@@ -429,9 +429,35 @@ result of this API call.
 
Returns the total size of segments awaiting compaction for the given dataSource.
This is only valid for dataSource which has compaction enabled.
-Compa

Removes the compaction config for a dataSource.
coordinator setting
which automates this operation to perform periodically.
Overlord Dynamic Configuration for details.
Note that all interval URL parameters are ISO 8601 strings delimited by a _ instead of a /
(e.g., 2016-06-27_2016-06-28).
-three-server configuration.
{"task":"index_kafka_wikiticker_f7011f8ffba384b_fpeclode"}

task reports for more details.
dynamic configuration, then log entries for class
diff --git a/docs/latest/operations/api-reference.html b/docs/latest/operations/api-reference.html
index 9c46416..dc7a2d0 100644
--- a/docs/latest/operations/api-reference.html
+++ b/docs/latest/operations/api-reference.html
@@ -429,9 +429,35 @@ result of this API call.
 
Returns the total size of segments awaiting compaction for the given dataSource.
This is only valid for dataSource which has compaction enabled.
-Compa

Removes the compaction config for a dataSource.
coordinator setting
which automates this operation to perform periodically.
Overlord Dynamic Configuration for details.
Note that all interval URL parameters are ISO 8601 strings delimited by a _ instead of a /
(e.g., 2016-06-27_2016-06-28).
-three-server configuration.
{"task":"index_kafka_wikiticker_f7011f8ffba384b_fpeclode"}

task reports for more details.
dynamic configuration, then log entries for class


-
To unsubscribe, e-mail: commits-unsubscr...@druid.apache.org
For additional commands, e-mail: commits-h...@druid.apache.org



[druid-website] branch 20_release created (now 520c3e4)

2020-10-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch 20_release
in repository https://gitbox.apache.org/repos/asf/druid-website.git.


  at 520c3e4  0.20.0 release update

This branch includes the following new commits:

 new 520c3e4  0.20.0 release update

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[druid-website-src] 01/01: Merge pull request #179 from apache/20_release

2020-10-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git

commit 0de3a2390ded3d90212bce3916c9d0264b89e3e7
Merge: 8caaf92 3e20479
Author: Jonathan Wei 
AuthorDate: Fri Oct 16 17:50:04 2020 -0700

Merge pull request #179 from apache/20_release

0.20.0 release update

 _config.yml   |  8 ++---
 docs/0.20.0/operations/api-reference.html | 50 +++
 docs/0.20.0/operations/metrics.html   | 12 
 docs/latest/operations/api-reference.html | 50 +++
 docs/latest/operations/metrics.html   | 12 
 5 files changed, 104 insertions(+), 28 deletions(-)





[druid-website-src] branch master updated (8caaf92 -> 0de3a23)

2020-10-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid-website-src.git.


from 8caaf92  Merge pull request #177 from apache/20rc2_update
 add 3e20479  0.20.0 release update
 new 0de3a23  Merge pull request #179 from apache/20_release

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 _config.yml   |  8 ++---
 docs/0.20.0/operations/api-reference.html | 50 +++
 docs/0.20.0/operations/metrics.html   | 12 
 docs/latest/operations/api-reference.html | 50 +++
 docs/latest/operations/metrics.html   | 12 
 5 files changed, 104 insertions(+), 28 deletions(-)





[druid-website] branch asf-site updated (99746f8 -> 1da765b)

2020-10-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/druid-website.git.


from 99746f8  Autobuild (#103)
 add 520c3e4  0.20.0 release update
 new 1da765b  Merge pull request #105 from apache/20_release

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 community/index.html   |1 +
 .../comparisons/druid-vs-elasticsearch.html|2 +-
 .../comparisons/druid-vs-key-value.html|2 +-
 .../comparisons/druid-vs-kudu.html |2 +-
 .../comparisons/druid-vs-redshift.html |2 +-
 .../comparisons/druid-vs-spark.html|2 +-
 .../comparisons/druid-vs-sql-on-hadoop.html|2 +-
 docs/0.13.0-incubating/configuration/index.html|2 +-
 docs/0.13.0-incubating/configuration/logging.html  |2 +-
 docs/0.13.0-incubating/configuration/realtime.html |2 +-
 .../dependencies/cassandra-deep-storage.html   |2 +-
 .../dependencies/deep-storage.html |2 +-
 .../dependencies/metadata-storage.html |2 +-
 docs/0.13.0-incubating/dependencies/zookeeper.html |2 +-
 docs/0.13.0-incubating/design/auth.html|2 +-
 docs/0.13.0-incubating/design/broker.html  |2 +-
 docs/0.13.0-incubating/design/coordinator.html |2 +-
 docs/0.13.0-incubating/design/historical.html  |2 +-
 docs/0.13.0-incubating/design/index.html   |2 +-
 .../0.13.0-incubating/design/indexing-service.html |2 +-
 docs/0.13.0-incubating/design/middlemanager.html   |2 +-
 docs/0.13.0-incubating/design/overlord.html|2 +-
 docs/0.13.0-incubating/design/peons.html   |2 +-
 docs/0.13.0-incubating/design/plumber.html |2 +-
 docs/0.13.0-incubating/design/realtime.html|2 +-
 docs/0.13.0-incubating/design/segments.html|2 +-
 docs/0.13.0-incubating/development/build.html  |2 +-
 .../development/experimental.html  |2 +-
 .../extensions-contrib/ambari-metrics-emitter.html |2 +-
 .../development/extensions-contrib/azure.html  |2 +-
 .../development/extensions-contrib/cassandra.html  |2 +-
 .../development/extensions-contrib/cloudfiles.html |2 +-
 .../extensions-contrib/distinctcount.html  |2 +-
 .../development/extensions-contrib/google.html |2 +-
 .../development/extensions-contrib/graphite.html   |2 +-
 .../development/extensions-contrib/influx.html |2 +-
 .../extensions-contrib/kafka-emitter.html  |2 +-
 .../extensions-contrib/kafka-simple.html   |2 +-
 .../extensions-contrib/materialized-view.html  |2 +-
 .../extensions-contrib/opentsdb-emitter.html   |2 +-
 .../development/extensions-contrib/orc.html|2 +-
 .../development/extensions-contrib/parquet.html|2 +-
 .../development/extensions-contrib/rabbitmq.html   |2 +-
 .../extensions-contrib/redis-cache.html|2 +-
 .../development/extensions-contrib/rocketmq.html   |2 +-
 .../development/extensions-contrib/sqlserver.html  |2 +-
 .../development/extensions-contrib/statsd.html |2 +-
 .../development/extensions-contrib/thrift.html |2 +-
 .../extensions-contrib/time-min-max.html   |2 +-
 .../extensions-core/approximate-histograms.html|2 +-
 .../development/extensions-core/avro.html  |2 +-
 .../development/extensions-core/bloom-filter.html  |2 +-
 .../extensions-core/datasketches-extension.html|2 +-
 .../extensions-core/datasketches-hll.html  |2 +-
 .../extensions-core/datasketches-quantiles.html|2 +-
 .../extensions-core/datasketches-theta.html|2 +-
 .../extensions-core/datasketches-tuple.html|2 +-
 .../extensions-core/druid-basic-security.html  |2 +-
 .../extensions-core/druid-kerberos.html|2 +-
 .../development/extensions-core/druid-lookups.html |2 +-
 .../development/extensions-core/examples.html  |2 +-
 .../development/extensions-core/hdfs.html  |2 +-
 .../extensions-core/kafka-eight-firehose.html  |2 +-
 .../kafka-extraction-namespace.html|2 +-
 .../extensions-core/kafka-ingestion.html   |2 +-
 .../extensions-core/lookups-cached-global.html |2 +-
 .../development/extensions-core/mysql.html |2 +-
 .../development/extensions-core/postgresql.html|2 +-
 .../development/extensions-core/protobuf.html  |2 +-
 .../development/extensions-core/s3.html|2 +-
 .../extensions-core/simple-client-sslcontext.html  |2

[druid-website] 01/01: Merge pull request #105 from apache/20_release

2020-10-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/druid-website.git

commit 1da765b25b0df51ae5d1db14ca5b0de2b570a6b2
Merge: 99746f8 520c3e4
Author: Jonathan Wei 
AuthorDate: Fri Oct 16 17:57:38 2020 -0700

Merge pull request #105 from apache/20_release

0.20.0 release update

 community/index.html   |1 +
 .../comparisons/druid-vs-elasticsearch.html|2 +-
 .../comparisons/druid-vs-key-value.html|2 +-
 .../comparisons/druid-vs-kudu.html |2 +-
 .../comparisons/druid-vs-redshift.html |2 +-
 .../comparisons/druid-vs-spark.html|2 +-
 .../comparisons/druid-vs-sql-on-hadoop.html|2 +-
 docs/0.13.0-incubating/configuration/index.html|2 +-
 docs/0.13.0-incubating/configuration/logging.html  |2 +-
 docs/0.13.0-incubating/configuration/realtime.html |2 +-
 .../dependencies/cassandra-deep-storage.html   |2 +-
 .../dependencies/deep-storage.html |2 +-
 .../dependencies/metadata-storage.html |2 +-
 docs/0.13.0-incubating/dependencies/zookeeper.html |2 +-
 docs/0.13.0-incubating/design/auth.html|2 +-
 docs/0.13.0-incubating/design/broker.html  |2 +-
 docs/0.13.0-incubating/design/coordinator.html |2 +-
 docs/0.13.0-incubating/design/historical.html  |2 +-
 docs/0.13.0-incubating/design/index.html   |2 +-
 .../0.13.0-incubating/design/indexing-service.html |2 +-
 docs/0.13.0-incubating/design/middlemanager.html   |2 +-
 docs/0.13.0-incubating/design/overlord.html|2 +-
 docs/0.13.0-incubating/design/peons.html   |2 +-
 docs/0.13.0-incubating/design/plumber.html |2 +-
 docs/0.13.0-incubating/design/realtime.html|2 +-
 docs/0.13.0-incubating/design/segments.html|2 +-
 docs/0.13.0-incubating/development/build.html  |2 +-
 .../development/experimental.html  |2 +-
 .../extensions-contrib/ambari-metrics-emitter.html |2 +-
 .../development/extensions-contrib/azure.html  |2 +-
 .../development/extensions-contrib/cassandra.html  |2 +-
 .../development/extensions-contrib/cloudfiles.html |2 +-
 .../extensions-contrib/distinctcount.html  |2 +-
 .../development/extensions-contrib/google.html |2 +-
 .../development/extensions-contrib/graphite.html   |2 +-
 .../development/extensions-contrib/influx.html |2 +-
 .../extensions-contrib/kafka-emitter.html  |2 +-
 .../extensions-contrib/kafka-simple.html   |2 +-
 .../extensions-contrib/materialized-view.html  |2 +-
 .../extensions-contrib/opentsdb-emitter.html   |2 +-
 .../development/extensions-contrib/orc.html|2 +-
 .../development/extensions-contrib/parquet.html|2 +-
 .../development/extensions-contrib/rabbitmq.html   |2 +-
 .../extensions-contrib/redis-cache.html|2 +-
 .../development/extensions-contrib/rocketmq.html   |2 +-
 .../development/extensions-contrib/sqlserver.html  |2 +-
 .../development/extensions-contrib/statsd.html |2 +-
 .../development/extensions-contrib/thrift.html |2 +-
 .../extensions-contrib/time-min-max.html   |2 +-
 .../extensions-core/approximate-histograms.html|2 +-
 .../development/extensions-core/avro.html  |2 +-
 .../development/extensions-core/bloom-filter.html  |2 +-
 .../extensions-core/datasketches-extension.html|2 +-
 .../extensions-core/datasketches-hll.html  |2 +-
 .../extensions-core/datasketches-quantiles.html|2 +-
 .../extensions-core/datasketches-theta.html|2 +-
 .../extensions-core/datasketches-tuple.html|2 +-
 .../extensions-core/druid-basic-security.html  |2 +-
 .../extensions-core/druid-kerberos.html|2 +-
 .../development/extensions-core/druid-lookups.html |2 +-
 .../development/extensions-core/examples.html  |2 +-
 .../development/extensions-core/hdfs.html  |2 +-
 .../extensions-core/kafka-eight-firehose.html  |2 +-
 .../kafka-extraction-namespace.html|2 +-
 .../extensions-core/kafka-ingestion.html   |2 +-
 .../extensions-core/lookups-cached-global.html |2 +-
 .../development/extensions-core/mysql.html |2 +-
 .../development/extensions-core/postgresql.html|2 +-
 .../development/extensions-core/protobuf.html  |2 +-
 .../development/extensions-core/s3.html|2 +-
 .../extensions-core/simple-client-sslcontext.html  |2 +-
 .../development/extensions-core/stats.html |2 +-
 .../development/extensions-core/test-stats.html|2 +-
 docs/0.13.0-incubating/development/extensions.html |2 +-
 docs

[druid] tag druid-0.20.0 created (now acdc6ee)

2020-10-16 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to tag druid-0.20.0
in repository https://gitbox.apache.org/repos/asf/druid.git.


  at acdc6ee  (commit)
No new revisions were added by this update.





[druid] branch master updated (8366717 -> cd231d8)

2020-11-09 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


 from 8366717  Add missing coordinator dynamic config to the web-console dialog for dynamic coordinator config (#10545)
 add cd231d8  Run integration test queries once (#10564)

No new revisions were added by this update.

Summary of changes:
 .../testing/utils/AbstractTestQueryHelper.java | 69 +++---
 .../coordinator/duty/ITAutoCompactionTest.java |  2 +-
 .../tests/indexer/AbstractITBatchIndexTest.java| 10 ++--
 .../indexer/AbstractITRealtimeIndexTaskTest.java   |  4 +-
 .../tests/indexer/AbstractStreamIndexingTest.java  |  4 +-
 .../tests/indexer/ITAppendBatchIndexTest.java  |  6 +-
 .../druid/tests/indexer/ITCompactionTaskTest.java  |  4 +-
 .../tests/indexer/ITNestedQueryPushDownTest.java   |  2 +-
 .../tests/query/ITBroadcastJoinQueryTest.java  |  6 +-
 .../druid/tests/query/ITSystemTableQueryTest.java  |  2 +-
 .../druid/tests/query/ITTwitterQueryTest.java  |  2 +-
 .../apache/druid/tests/query/ITUnionQueryTest.java |  4 +-
 .../druid/tests/query/ITWikipediaQueryTest.java|  2 +-
 13 files changed, 56 insertions(+), 61 deletions(-)





[druid] branch master updated (2f4d6da -> ba915b7)

2020-11-19 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 2f4d6da  Updates segment metadata query documentation  (#10589)
 add ba915b7  Security overview documentation (#10339)

No new revisions were added by this update.

Summary of changes:
 docs/assets/security-model-1.png   | Bin 0 -> 85098 bytes
 docs/assets/security-model-2.png   | Bin 0 -> 29613 bytes
 .../extensions-core/druid-basic-security.md| 101 +---
 docs/operations/auth-ldap.md   | 196 +++
 docs/operations/password-provider.md   |  21 +-
 docs/operations/security-overview.md   | 265 +
 docs/operations/security-user-auth.md  | 151 
 docs/operations/tls-support.md |  17 +-
 docs/querying/sql.md   |   2 +-
 website/.spelling  |   4 +
 website/i18n/en.json   |  12 +
 website/sidebars.json  |  35 ++-
 12 files changed, 677 insertions(+), 127 deletions(-)
 create mode 100644 docs/assets/security-model-1.png
 create mode 100644 docs/assets/security-model-2.png
 create mode 100644 docs/operations/auth-ldap.md
 create mode 100644 docs/operations/security-overview.md
 create mode 100644 docs/operations/security-user-auth.md





[druid] branch master updated (52d46ce -> 7eb5f59)

2020-12-04 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git.


from 52d46ce  Move common configurations to TuningConfig (#10478)
 add 7eb5f59  Fix string byte calculation in StringDimensionIndexer (#10623)

No new revisions were added by this update.

Summary of changes:
 .../druid/segment/StringDimensionIndexer.java  | 10 +++--
 .../incremental/IncrementalIndexRowSizeTest.java   | 45 ++
 .../realtime/appenderator/AppenderatorTest.java| 16 
 3 files changed, 52 insertions(+), 19 deletions(-)





[incubator-druid] branch master updated (c44452f -> d091347)

2019-11-19 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git.


from c44452f  Tidy up lifecycle, query, and ingestion logging. (#8889)
 add d091347  sampler returns nulls in CSV (#8871)

No new revisions were added by this update.

Summary of changes:
 .../druid/indexing/kafka/KafkaSamplerSpecTest.java |  67 +--
 .../indexing/kinesis/KinesisSamplerSpecTest.java   |  67 +--
 .../indexing/overlord/sampler/FirehoseSampler.java |   4 +-
 .../overlord/sampler/FirehoseSamplerTest.java  | 474 +++--
 4 files changed, 514 insertions(+), 98 deletions(-)





[incubator-druid] branch master updated (7250010 -> 0514e56)

2019-11-22 Thread jonwei
This is an automated email from the ASF dual-hosted git repository.

jonwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git.


from 7250010  add parquet support to native batch (#8883)
 add 0514e56  add TsvInputFormat (#8915)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/druid/data/input/InputFormat.java   |   4 +-
 .../apache/druid/data/input/impl/CSVParseSpec.java |   2 +-
 .../druid/data/input/impl/CsvInputFormat.java  | 119 +
 .../apache/druid/data/input/impl/CsvReader.java| 108 ++-
 ...utFormat.java => SeparateValueInputFormat.java} |  89 ++-
 .../{CsvReader.java => SeparateValueReader.java}   |  33 --
 ...ongDimensionSchema.java => TsvInputFormat.java} |  29 ++---
 .../apache/druid/data/input/impl/TsvReader.java}   |  40 +++
 .../druid/java/util/common/parsers/CSVParser.java  |   4 +-
 .../druid/data/input/impl/CsvInputFormatTest.java  |   4 +-
 .../druid/data/input/impl/CsvReaderTest.java   |  28 +++--
 .../input/impl/InputEntityIteratingReaderTest.java |   1 +
 .../input/impl/TimedShutoffInputSourceTest.java|   2 +-
 ...nputFormatTest.java => TsvInputFormatTest.java} |  12 +--
 .../{CsvReaderTest.java => TsvReaderTest.java} |  86 ---
 .../ParallelIndexSupervisorTaskSerdeTest.java  |   2 +-
 .../sampler/CsvInputSourceSamplerTest.java |   2 +-
 .../overlord/sampler/InputSourceSamplerTest.java   |   2 +-
 .../RecordSupplierInputSourceTest.java |   2 +-
 19 files changed, 225 insertions(+), 344 deletions(-)
 copy core/src/main/java/org/apache/druid/data/input/impl/{CsvInputFormat.java => SeparateValueInputFormat.java} (67%)
 copy core/src/main/java/org/apache/druid/data/input/impl/{CsvReader.java => SeparateValueReader.java} (80%)
 copy core/src/main/java/org/apache/druid/data/input/impl/{LongDimensionSchema.java => TsvInputFormat.java} (62%)
 copy core/src/{test/java/org/apache/druid/data/input/impl/NoopInputFormat.java => main/java/org/apache/druid/data/input/impl/TsvReader.java} (62%)
 copy core/src/test/java/org/apache/druid/data/input/impl/{CsvInputFormatTest.java => TsvInputFormatTest.java} (80%)
 copy core/src/test/java/org/apache/druid/data/input/impl/{CsvReaderTest.java => TsvReaderTest.java} (74%)




