clintropolis commented on a change in pull request #11018:
URL: https://github.com/apache/druid/pull/11018#discussion_r603712536



##########
File path: extensions-core/protobuf-extensions/src/main/java/org/apache/druid/data/input/protobuf/ProtobufReader.java
##########
@@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.data.input.protobuf;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.collect.Iterators;
+import com.google.protobuf.DynamicMessage;
+import com.google.protobuf.InvalidProtocolBufferException;
+import com.google.protobuf.util.JsonFormat;
+import org.apache.commons.io.IOUtils;
+import org.apache.druid.data.input.InputEntity;
+import org.apache.druid.data.input.InputRow;
+import org.apache.druid.data.input.InputRowSchema;
+import org.apache.druid.data.input.IntermediateRowParsingReader;
+import org.apache.druid.data.input.impl.MapInputRowParser;
+import org.apache.druid.java.util.common.CloseableIterators;
+import org.apache.druid.java.util.common.parsers.CloseableIterator;
+import org.apache.druid.java.util.common.parsers.JSONFlattenerMaker;
+import org.apache.druid.java.util.common.parsers.JSONPathSpec;
+import org.apache.druid.java.util.common.parsers.ObjectFlattener;
+import org.apache.druid.java.util.common.parsers.ObjectFlatteners;
+import org.apache.druid.java.util.common.parsers.ParseException;
+import org.apache.druid.utils.CollectionUtils;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+public class ProtobufReader extends IntermediateRowParsingReader<DynamicMessage>
+{
+  private final InputRowSchema inputRowSchema;
+  private final InputEntity source;
+  private final JSONPathSpec flattenSpec;
+  private final ObjectFlattener<JsonNode> recordFlattener;
+  private final ProtobufBytesDecoder protobufBytesDecoder;
+
+  ProtobufReader(
+      InputRowSchema inputRowSchema,
+      InputEntity source,
+      ProtobufBytesDecoder protobufBytesDecoder,
+      JSONPathSpec flattenSpec
+  )
+  {
+    this.inputRowSchema = inputRowSchema;
+    this.source = source;
+    this.protobufBytesDecoder = protobufBytesDecoder;
+    this.flattenSpec = flattenSpec;
+    this.recordFlattener = ObjectFlatteners.create(flattenSpec, new JSONFlattenerMaker(true));
+  }
+
+  @Override
+  protected CloseableIterator<DynamicMessage> intermediateRowIterator() throws IOException
+  {
+    return CloseableIterators.withEmptyBaggage(
+        Iterators.singletonIterator(protobufBytesDecoder.parse(ByteBuffer.wrap(IOUtils.toByteArray(source.open())
+        ))));

Review comment:
       nit: formatting is sort of funny here
   ```suggestion
       return CloseableIterators.withEmptyBaggage(
           Iterators.singletonIterator(protobufBytesDecoder.parse(ByteBuffer.wrap(IOUtils.toByteArray(source.open()))))
       );
   ```
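   Or, as a sketch (untested), pull the decode out into locals so the long line goes away entirely:
   ```java
   // Read the whole entity into memory, then decode it as a single protobuf message.
   final byte[] bytes = IOUtils.toByteArray(source.open());
   final DynamicMessage message = protobufBytesDecoder.parse(ByteBuffer.wrap(bytes));
   return CloseableIterators.withEmptyBaggage(Iterators.singletonIterator(message));
   ```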

##########
File path: docs/ingestion/data-formats.md
##########
@@ -272,6 +272,42 @@ The `inputFormat` to load data of Avro OCF format. An example is:
 |schema| JSON Object |Define a reader schema to be used when parsing Avro records, this is useful when parsing multiple versions of Avro OCF file data | no (default will decode using the writer schema contained in the OCF file) |
 | binaryAsString | Boolean | Specifies if the bytes parquet column which is not logically marked as a string or enum type should be treated as a UTF-8 encoded string. | no (default = false) |
 
+### Protobuf
+
+> You need to include the [`druid-protobuf-extensions`](../development/extensions-core/protobuf.md) as an extension to use the Protobuf input format.
+
+The `inputFormat` to load data of Protobuf format. An example is:
+```json
+"ioConfig": {
+  "inputFormat": {
+    "type": "protobuf",
+    "protoBytesDecoder": {
+      "type": "file",
+      "descriptor": "file:///tmp/metrics.desc",
+      "protoMessageType": "Metrics"
+    },
+    "flattenSpec": {
+      "useFieldDiscovery": true,
+      "fields": [
+        {
+          "type": "path",
+          "name": "someRecord_subInt",
+          "expr": "$.someRecord.subInt"
+        }
+      ]
+    },
+    "binaryAsString": false
+  },
+  ...
+}
+```
+
+| Field | Type | Description | Required |
+|-------|------|-------------|----------|
+|type| String| This should be set to `protobuf` to read Protobuf file| yes |

Review comment:
       Since I guess this is more stream oriented, maybe:
   ```suggestion
   |type| String| This should be set to `protobuf` to read Protobuf serialized data| yes |
   ```
   or "to read Protobuf formatted data" or similar.

##########
File path: docs/ingestion/data-formats.md
##########
@@ -272,6 +272,42 @@ The `inputFormat` to load data of Avro OCF format. An example is:
 |schema| JSON Object |Define a reader schema to be used when parsing Avro records, this is useful when parsing multiple versions of Avro OCF file data | no (default will decode using the writer schema contained in the OCF file) |
 | binaryAsString | Boolean | Specifies if the bytes parquet column which is not logically marked as a string or enum type should be treated as a UTF-8 encoded string. | no (default = false) |
 
+### Protobuf
+
+> You need to include the [`druid-protobuf-extensions`](../development/extensions-core/protobuf.md) as an extension to use the Protobuf input format.
+
+The `inputFormat` to load data of Protobuf format. An example is:
+```json
+"ioConfig": {
+  "inputFormat": {
+    "type": "protobuf",
+    "protoBytesDecoder": {
+      "type": "file",
+      "descriptor": "file:///tmp/metrics.desc",
+      "protoMessageType": "Metrics"
+    },
+    "flattenSpec": {
+      "useFieldDiscovery": true,
+      "fields": [
+        {
+          "type": "path",
+          "name": "someRecord_subInt",
+          "expr": "$.someRecord.subInt"
+        }
+      ]
+    },
+    "binaryAsString": false

Review comment:
       I think this is an Avro/Parquet/ORC parameter
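   If so, the example above could probably just drop it; a sketch of the trimmed example:
   ```json
   "inputFormat": {
     "type": "protobuf",
     "protoBytesDecoder": {
       "type": "file",
       "descriptor": "file:///tmp/metrics.desc",
       "protoMessageType": "Metrics"
     },
     "flattenSpec": {
       "useFieldDiscovery": true,
       "fields": [
         { "type": "path", "name": "someRecord_subInt", "expr": "$.someRecord.subInt" }
       ]
     }
   }
   ```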

##########
File path: docs/ingestion/data-formats.md
##########
@@ -1104,6 +1142,83 @@ Sample spec:
 See the [extension description](../development/extensions-core/protobuf.md) for
 more details and examples.
 
+#### Protobuf Bytes Decoder
+
+If `type` is not included, the avroBytesDecoder defaults to `schema_registry`.

Review comment:
       ```suggestion
   If `type` is not included, the `protoBytesDecoder` defaults to `schema_registry`.
   ```
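   It might also help to show what that default looks like; a sketch (assuming the registry decoder just takes a `url` here, like the Avro `schema_registry` decoder does):
   ```json
   "protoBytesDecoder": {
     "type": "schema_registry",
     "url": "http://localhost:8081"
   }
   ```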



