This is an automated email from the ASF dual-hosted git repository.

victoria pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new 26c5cac41a Fix a link problem (#13876)
26c5cac41a is described below

commit 26c5cac41af0fb755736e9d40390b59fc56c8fea
Author: Jill Osborne <[email protected]>
AuthorDate: Thu Mar 2 17:09:51 2023 +0000

    Fix a link problem (#13876)
---
 docs/ingestion/data-formats.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/ingestion/data-formats.md b/docs/ingestion/data-formats.md
index e65af91cd4..7bf50956a5 100644
--- a/docs/ingestion/data-formats.md
+++ b/docs/ingestion/data-formats.md
@@ -420,7 +420,7 @@ Note that it is essentially a map of integer schema ID to avro schema object. Th
 
 ##### SchemaRepo Based Avro Bytes Decoder
 
-This Avro bytes decoder first extracts `subject` and `id` from the input message bytes, and then uses them to look up the Avro schema used to decode the Avro record from bytes. For details, see the [schema repo](https://github.com/schema-repo/schema-repo) and [AVRO-1124](https://issues.apache.org/jira/browse/AVRO-1124). You will need an http service like schema repo to hold the avro schema. For information on registering a schema on the message producer side, see `org.apache.druid.data.i [...]
+This Avro bytes decoder first extracts `subject` and `id` from the input message bytes, and then uses them to look up the Avro schema used to decode the Avro record from bytes. For details, see the [schema repo](https://github.com/schema-repo/schema-repo). You need an HTTP service like schema repo to hold the Avro schema. For information on registering a schema on the message producer side, see `org.apache.druid.data.input.AvroStreamInputRowParserTest#testParse()`.
 
 | Field | Type | Description | Required |
 |-------|------|-------------|----------|
@@ -652,7 +652,7 @@ Each entry in the `fields` list can have the following components:
 - Enabling `useFieldDiscovery` will only automatically detect "simple" fields at the root level that correspond to data types that Druid supports. This includes strings, numbers, and lists of strings or numbers. Other types will not be automatically detected, and must be specified explicitly in the `fields` list.
 - Duplicate field `name`s are not allowed. An exception will be thrown.
 - If `useFieldDiscovery` is enabled, any discovered field with the same name as one already defined in the `fields` list will be skipped, rather than added twice.
-- [https://jsonpath.com/](https://jsonpath.com/) is useful for testing `path`-type expressions.
+- [JSONPath evaluator](https://jsonpath.com/) is useful for testing `path`-type expressions.
 - jackson-jq supports a subset of the full [jq](https://stedolan.github.io/jq/) syntax.  Please refer to the [jackson-jq documentation](https://github.com/eiiches/jackson-jq) for details.
 - [JsonPath](https://github.com/jayway/JsonPath) supports a bunch of functions, but not all of these functions are supported by Druid now. Following matrix shows the current supported JsonPath functions and corresponding data formats. Please also note the output data type of these functions.
   


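For context on the second hunk (not part of the commit itself): the `path`-type `fields` entries that the changed bullet refers to appear in a Druid ingestion spec's `flattenSpec` roughly as follows. The field name and JSONPath expression here are invented for illustration; the `type`/`name`/`expr` keys follow the Druid `flattenSpec` format documented in data-formats.md.

```json
{
  "type": "json",
  "flattenSpec": {
    "useFieldDiscovery": true,
    "fields": [
      {
        "type": "path",
        "name": "userId",
        "expr": "$.user.id"
      }
    ]
  }
}
```

An expression like `$.user.id` above is exactly the kind of `path`-type expression the updated bullet suggests testing in a JSONPath evaluator before putting it in the spec.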
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
