This is an automated email from the ASF dual-hosted git repository.
jihoonson pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-druid.git
The following commit(s) were added to refs/heads/master by this push:
new 849ba86 fix missing property in JsonTypeInfo of SegmentWriteOutMediumFactory (#6656)
849ba86 is described below
commit 849ba867b209dabef67c46b590c36c6587e5ecb3
Author: Mingming Qiu <[email protected]>
AuthorDate: Wed Nov 28 07:59:58 2018 +0800
fix missing property in JsonTypeInfo of SegmentWriteOutMediumFactory (#6656)
---
docs/content/configuration/index.md | 2 +-
docs/content/development/extensions-core/kafka-ingestion.md | 8 +++++++-
docs/content/ingestion/native_tasks.md | 10 ++++++++--
docs/content/ingestion/stream-pull.md | 8 +++++++-
.../druid/segment/writeout/SegmentWriteOutMediumFactory.java | 2 +-
5 files changed, 24 insertions(+), 6 deletions(-)
diff --git a/docs/content/configuration/index.md b/docs/content/configuration/index.md
index 5b7474d..8db69fe 100644
--- a/docs/content/configuration/index.md
+++ b/docs/content/configuration/index.md
@@ -1156,7 +1156,7 @@ then the value from the configuration below is used:
|Property|Description|Default|
|--------|-----------|-------|
-|`druid.peon.defaultSegmentWriteOutMediumFactory`|`tmpFile` or `offHeapMemory`, see explanation above|`tmpFile`|
+|`druid.peon.defaultSegmentWriteOutMediumFactory.type`|`tmpFile` or `offHeapMemory`, see explanation above|`tmpFile`|
## Broker
diff --git a/docs/content/development/extensions-core/kafka-ingestion.md b/docs/content/development/extensions-core/kafka-ingestion.md
index b0cf967..ebecacf 100644
--- a/docs/content/development/extensions-core/kafka-ingestion.md
+++ b/docs/content/development/extensions-core/kafka-ingestion.md
@@ -150,7 +150,7 @@ The tuningConfig is optional and default parameters will be used if no tuningCon
|`httpTimeout`|ISO8601 Period|How long to wait for a HTTP response from an indexing task.|no (default == PT10S)|
|`shutdownTimeout`|ISO8601 Period|How long to wait for the supervisor to attempt a graceful shutdown of tasks before exiting.|no (default == PT80S)|
|`offsetFetchPeriod`|ISO8601 Period|How often the supervisor queries Kafka and the indexing tasks to fetch current offsets and calculate lag.|no (default == PT30S, min == PT5S)|
-|`segmentWriteOutMediumFactory`|String|Segment write-out medium to use when creating segments. See [Additional Peon Configuration: SegmentWriteOutMediumFactory](../../configuration/index.html#segmentwriteoutmediumfactory) for explanation and available options.|no (not specified by default, the value from `druid.peon.defaultSegmentWriteOutMediumFactory` is used)|
+|`segmentWriteOutMediumFactory`|Object|Segment write-out medium to use when creating segments. See below for more information.|no (not specified by default, the value from `druid.peon.defaultSegmentWriteOutMediumFactory.type` is used)|
|`intermediateHandoffPeriod`|ISO8601 Period|How often the tasks should hand off segments. Handoff will happen either if `maxRowsPerSegment` or `maxTotalRows` is hit or every `intermediateHandoffPeriod`, whichever happens earlier.|no (default == P2147483647D)|
|`logParseExceptions`|Boolean|If true, log an error message when a parsing exception occurs, containing information about the row where the error occurred.|no, default == false|
|`maxParseExceptions`|Integer|The maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overridden if `reportParseExceptions` is set.|no, unlimited default|
@@ -180,6 +180,12 @@ For Roaring bitmaps:
|`type`|String|Must be `roaring`.|yes|
|`compressRunOnSerialization`|Boolean|Use a run-length encoding where it is estimated as more space efficient.|no (default == `true`)|
+#### SegmentWriteOutMediumFactory
+
+|Field|Type|Description|Required|
+|-----|----|-----------|--------|
+|`type`|String|See [Additional Peon Configuration: SegmentWriteOutMediumFactory](../../configuration/index.html#segmentwriteoutmediumfactory) for explanation and available options.|yes|
+
### KafkaSupervisorIOConfig
|Field|Type|Description|Required|
diff --git a/docs/content/ingestion/native_tasks.md b/docs/content/ingestion/native_tasks.md
index 369f76d..4857255 100644
--- a/docs/content/ingestion/native_tasks.md
+++ b/docs/content/ingestion/native_tasks.md
@@ -160,7 +160,7 @@ The tuningConfig is optional and default parameters will be used if no tuningCon
|forceExtendableShardSpecs|Forces use of extendable shardSpecs. Experimental feature intended for use with the [Kafka indexing service extension](../development/extensions-core/kafka-ingestion.html).|false|no|
|reportParseExceptions|If true, exceptions encountered during parsing will be thrown and will halt ingestion; if false, unparseable rows and fields will be skipped.|false|no|
|pushTimeout|Milliseconds to wait for pushing segments. It must be >= 0, where 0 means to wait forever.|0|no|
-|segmentWriteOutMediumFactory|Segment write-out medium to use when creating segments. See [Additional Peon Configuration: SegmentWriteOutMediumFactory](../configuration/index.html#segmentwriteoutmediumfactory) for explanation and available options.|Not specified, the value from `druid.peon.defaultSegmentWriteOutMediumFactory` is used|no|
+|segmentWriteOutMediumFactory|Segment write-out medium to use when creating segments. See [SegmentWriteOutMediumFactory](#segmentWriteOutMediumFactory).|Not specified, the value from `druid.peon.defaultSegmentWriteOutMediumFactory.type` is used|no|
|maxNumSubTasks|Maximum number of tasks which can be run at the same time.|Integer.MAX_VALUE|no|
|maxRetry|Maximum number of retries on task failures.|3|no|
|taskStatusCheckPeriodMs|Polling period in milliseconds to check running task statuses.|1000|no|
@@ -501,7 +501,7 @@ The tuningConfig is optional and default parameters will be used if no tuningCon
|forceGuaranteedRollup|Forces guaranteeing the [perfect rollup](../ingestion/index.html#roll-up-modes). The perfect rollup optimizes the total size of generated segments and querying time while indexing time will be increased. This flag cannot be used with either `appendToExisting` of IOConfig or `forceExtendableShardSpecs`. For more details, see the below __Segment pushing modes__ section.|false|no|
|reportParseExceptions|DEPRECATED. If true, exceptions encountered during parsing will be thrown and will halt ingestion; if false, unparseable rows and fields will be skipped. Setting `reportParseExceptions` to true will override existing configurations for `maxParseExceptions` and `maxSavedParseExceptions`, setting `maxParseExceptions` to 0 and limiting `maxSavedParseExceptions` to no more than 1.|false|no|
|pushTimeout|Milliseconds to wait for pushing segments. It must be >= 0, where 0 means to wait forever.|0|no|
-|segmentWriteOutMediumFactory|Segment write-out medium to use when creating segments. See [Additional Peon Configuration: SegmentWriteOutMediumFactory](../configuration/index.html#segmentwriteoutmediumfactory) for explanation and available options.|Not specified, the value from `druid.peon.defaultSegmentWriteOutMediumFactory` is used|no|
+|segmentWriteOutMediumFactory|Segment write-out medium to use when creating segments. See [SegmentWriteOutMediumFactory](#segmentWriteOutMediumFactory).|Not specified, the value from `druid.peon.defaultSegmentWriteOutMediumFactory.type` is used|no|
|logParseExceptions|If true, log an error message when a parsing exception occurs, containing information about the row where the error occurred.|false|no|
|maxParseExceptions|The maximum number of parse exceptions that can occur before the task halts ingestion and fails. Overridden if `reportParseExceptions` is set.|unlimited|no|
|maxSavedParseExceptions|When a parse exception occurs, Druid can keep track of the most recent parse exceptions. "maxSavedParseExceptions" limits how many exception instances will be saved. These saved exceptions will be made available after the task finishes in the [task completion report](../ingestion/reports.html). Overridden if `reportParseExceptions` is set.|0|no|
@@ -533,6 +533,12 @@ For Roaring bitmaps:
|type|String|Must be `roaring`.|yes|
|compressRunOnSerialization|Boolean|Use a run-length encoding where it is estimated as more space efficient.|no (default == `true`)|
+#### SegmentWriteOutMediumFactory
+
+|Field|Type|Description|Required|
+|-----|----|-----------|--------|
+|type|String|See [Additional Peon Configuration: SegmentWriteOutMediumFactory](../configuration/index.html#segmentwriteoutmediumfactory) for explanation and available options.|yes|
+
#### Segment pushing modes
While ingesting data using the Index task, it creates segments from the input
data and pushes them. For segment pushing,
diff --git a/docs/content/ingestion/stream-pull.md b/docs/content/ingestion/stream-pull.md
index db3a76a..08f0293 100644
--- a/docs/content/ingestion/stream-pull.md
+++ b/docs/content/ingestion/stream-pull.md
@@ -178,7 +178,7 @@ The tuningConfig is optional and default parameters will be used if no tuningCon
|reportParseExceptions|Boolean|If true, exceptions encountered during parsing will be thrown and will halt ingestion. If false, unparseable rows and fields will be skipped. If an entire row is skipped, the "unparseable" counter will be incremented. If some fields in a row were parseable and some were not, the parseable fields will be indexed and the "unparseable" counter will not be incremented.|no (default == false)|
|handoffConditionTimeout|long|Milliseconds to wait for segment handoff. It must be >= 0, where 0 means to wait forever.|no (default == 0)|
|alertTimeout|long|Milliseconds timeout after which an alert is created if the task isn't finished by then. This allows users to monitor tasks that are failing to finish and give up the worker slot for any unexpected errors.|no (default == 0)|
-|segmentWriteOutMediumFactory|String|Segment write-out medium to use when creating segments. See [Indexing Service Configuration](../configuration/indexing-service.html) page, "SegmentWriteOutMediumFactory" section for explanation and available options.|no (not specified by default, the value from `druid.peon.defaultSegmentWriteOutMediumFactory` is used)|
+|segmentWriteOutMediumFactory|Object|Segment write-out medium to use when creating segments. See below for more information.|no (not specified by default, the value from `druid.peon.defaultSegmentWriteOutMediumFactory.type` is used)|
|dedupColumn|String|The column used to judge whether a row is already in this segment; if so, the row is thrown away. If it is a String-type column, to reduce heap cost, the long-type hashcode of the column's value is used for the comparison, so there may be a very small chance of throwing away a row that was not ingested before.|no (default == null)|
|indexSpec|Object|Tune how data is indexed. See below for more information.|no|
@@ -192,6 +192,12 @@ The following policies are available:
* `messageTime` – Can be used for non-"current time" as long as that data is relatively in sequence. Events are rejected if they are less than `windowPeriod` from the event with the latest timestamp. Hand off only occurs if an event is seen after the segmentGranularity and `windowPeriod` (hand off will not periodically occur unless you have a constant stream of data).
* `none` – All events are accepted. Never hands off data unless shutdown() is called on the configured firehose.
+#### SegmentWriteOutMediumFactory
+
+|Field|Type|Description|Required|
+|-----|----|-----------|--------|
+|type|String|See [Additional Peon Configuration: SegmentWriteOutMediumFactory](../configuration/index.html#segmentwriteoutmediumfactory) for explanation and available options.|yes|
+
#### IndexSpec
|Field|Type|Description|Required|
diff --git a/processing/src/main/java/org/apache/druid/segment/writeout/SegmentWriteOutMediumFactory.java b/processing/src/main/java/org/apache/druid/segment/writeout/SegmentWriteOutMediumFactory.java
index be55b4f..bf4afef 100644
--- a/processing/src/main/java/org/apache/druid/segment/writeout/SegmentWriteOutMediumFactory.java
+++ b/processing/src/main/java/org/apache/druid/segment/writeout/SegmentWriteOutMediumFactory.java
@@ -27,7 +27,7 @@ import java.io.File;
import java.io.IOException;
import java.util.Set;
-@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, defaultImpl = TmpFileSegmentWriteOutMediumFactory.class)
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type", defaultImpl = TmpFileSegmentWriteOutMediumFactory.class)
@JsonSubTypes(value = {
    @JsonSubTypes.Type(name = "tmpFile", value = TmpFileSegmentWriteOutMediumFactory.class),
    @JsonSubTypes.Type(name = "offHeapMemory", value = OffHeapMemorySegmentWriteOutMediumFactory.class),
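
With Jackson's JsonTypeInfo.Id.NAME, the type-id property defaults to "@type" when no `property` is given, so before this one-line change the documented `{"type": "offHeapMemory"}` shape could not be deserialized. Below is a minimal, self-contained sketch of the behavior the fix enables; JsonTypeInfoDemo, TmpFile, and OffHeapMemory are stand-in names for illustration, not Druid's real classes.

// Minimal sketch of the Jackson behavior behind this fix; the class and
// subtype names are stand-ins, not Druid's real factories.
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonTypeInfoDemo
{
  @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type", defaultImpl = TmpFile.class)
  @JsonSubTypes(value = {
      @JsonSubTypes.Type(name = "tmpFile", value = TmpFile.class),
      @JsonSubTypes.Type(name = "offHeapMemory", value = OffHeapMemory.class)
  })
  interface Factory {}

  static class TmpFile implements Factory {}

  static class OffHeapMemory implements Factory {}

  public static void main(String[] args) throws Exception
  {
    ObjectMapper mapper = new ObjectMapper();
    // With property = "type", the documented config shape resolves correctly:
    Factory f = mapper.readValue("{\"type\": \"offHeapMemory\"}", Factory.class);
    System.out.println(f.getClass().getSimpleName()); // prints OffHeapMemory
    // Without property = "type", Id.NAME defaults the type-id key to "@type",
    // so the "type" field above would be treated as an unknown property instead.
  }
}

Note that `defaultImpl` still applies when the object omits "type" entirely: such a spec falls back to TmpFileSegmentWriteOutMediumFactory (the stand-in TmpFile above), matching the documented `tmpFile` default.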
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]