[ 
https://issues.apache.org/jira/browse/NIFI-3682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15975628#comment-15975628
 ] 

ASF GitHub Bot commented on NIFI-3682:
--------------------------------------

Github user joewitt commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/1682#discussion_r112322877
  
    --- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/grok/GrokReader.java
 ---
    @@ -23,38 +23,58 @@
     import java.io.Reader;
     import java.util.ArrayList;
     import java.util.List;
    +import java.util.Map;
    +import java.util.regex.Matcher;
     
     import org.apache.nifi.annotation.documentation.CapabilityDescription;
     import org.apache.nifi.annotation.documentation.Tags;
     import org.apache.nifi.annotation.lifecycle.OnEnabled;
    +import org.apache.nifi.components.AllowableValue;
     import org.apache.nifi.components.PropertyDescriptor;
     import org.apache.nifi.controller.ConfigurationContext;
     import org.apache.nifi.flowfile.FlowFile;
     import org.apache.nifi.logging.ComponentLog;
     import org.apache.nifi.processor.util.StandardValidators;
    +import org.apache.nifi.schema.access.SchemaAccessStrategy;
    +import org.apache.nifi.schema.access.SchemaNotFoundException;
    +import org.apache.nifi.schemaregistry.services.SchemaRegistry;
     import org.apache.nifi.serialization.RecordReader;
    -import org.apache.nifi.serialization.RowRecordReaderFactory;
    -import org.apache.nifi.serialization.SchemaRegistryRecordReader;
    +import org.apache.nifi.serialization.RecordReaderFactory;
    +import org.apache.nifi.serialization.SchemaRegistryService;
    +import org.apache.nifi.serialization.SimpleRecordSchema;
    +import org.apache.nifi.serialization.record.DataType;
    +import org.apache.nifi.serialization.record.RecordField;
    +import org.apache.nifi.serialization.record.RecordFieldType;
     import org.apache.nifi.serialization.record.RecordSchema;
     
     import io.thekraken.grok.api.Grok;
    +import io.thekraken.grok.api.GrokUtils;
     import io.thekraken.grok.api.exception.GrokException;
     
     @Tags({"grok", "logs", "logfiles", "parse", "unstructured", "text", 
"record", "reader", "regex", "pattern", "logstash"})
     @CapabilityDescription("Provides a mechanism for reading unstructured text 
data, such as log files, and structuring the data "
         + "so that it can be processed. The service is configured using Grok 
patterns. "
         + "The service reads from a stream of data and splits each message 
that it finds into a separate Record, each containing the fields that are 
configured. "
    -    + "If a line in the input does not match the expected message pattern, 
the line of text is considered to be part of the previous "
    -    + "message, with the exception of stack traces. A stack trace that is 
found at the end of a log message is considered to be part "
    -    + "of the previous message but is added to the 'STACK_TRACE' field of 
the Record. If a record has no stack trace, it will have a NULL value "
    -    + "for the STACK_TRACE field. All fields that are parsed are 
considered to be of type String by default. If there is need to change the type 
of a field, "
    -    + "this can be accomplished by configuring the Schema Registry to use 
and adding the appropriate schema.")
    -public class GrokReader extends SchemaRegistryRecordReader implements 
RowRecordReaderFactory {
    +    + "If a line in the input does not match the expected message pattern, 
the line of text is either considered to be part of the previous "
    +    + "message or is skipped, depending on the configuration, with the 
exception of stack traces. A stack trace that is found at the end of "
    +    + "a log message is considered to be part of the previous message but 
is added to the 'stackTrace' field of the Record. If a record has "
    +    + "no stack trace, it will have a NULL value for the stackTrace field. 
All fields that are parsed are considered to be of type String by default. "
    --- End diff ---
    
    Does the stack trace reference here make sense for general Grok 
consumption? I mean, isn't that just a function of a given log file, and 
whether or not it is captured depends on the grok expression used? Or is this a 
first-class concept that we should be talking about here?
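For context, the line-grouping behavior the CapabilityDescription talks about could be sketched as follows. This is an illustrative sketch using plain java.util.regex rather than the Grok library, with a hypothetical message pattern; it is not the actual GrokReader implementation:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the described reader behavior: a line matching the message
// pattern starts a new record; a non-matching line (e.g. a stack trace)
// is appended to the previous record's "stackTrace" field.
public class LogLineGrouper {

    // Hypothetical message pattern: "LEVEL message"
    private static final Pattern MESSAGE =
        Pattern.compile("^(?<level>INFO|WARN|ERROR) (?<message>.*)$");

    static List<Map<String, String>> group(List<String> lines) {
        List<Map<String, String>> records = new ArrayList<>();
        for (String line : lines) {
            Matcher m = MESSAGE.matcher(line);
            if (m.matches()) {
                Map<String, String> record = new LinkedHashMap<>();
                record.put("level", m.group("level"));
                record.put("message", m.group("message"));
                records.add(record);
            } else if (!records.isEmpty()) {
                // Non-matching line: treat it as a continuation of the
                // previous message, accumulated in the stackTrace field.
                Map<String, String> prev = records.get(records.size() - 1);
                prev.merge("stackTrace", line, (a, b) -> a + "\n" + b);
            }
        }
        return records;
    }

    public static void main(String[] args) {
        List<Map<String, String>> records = group(List.of(
            "ERROR something failed",
            "java.lang.RuntimeException: boom",
            "\tat com.example.App.main(App.java:10)",
            "INFO recovered"));
        System.out.println(records.size()); // 2
        System.out.println(records.get(0).get("stackTrace"));
    }
}
```

Whether the continuation lines land in a dedicated stackTrace field, as here, or are simply governed by the grok expression is exactly the design question being raised.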


> Add "Schema Access Strategy" to Record Readers and Writers
> ----------------------------------------------------------
>
>                 Key: NIFI-3682
>                 URL: https://issues.apache.org/jira/browse/NIFI-3682
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Extensions
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>             Fix For: 1.2.0
>
>
> Currently the record readers are mostly configured with a Schema Registry 
> service and the name of the schema. We should instead allow the user to choose 
> one of several strategies for determining the schema: Schema Registry + 
> schema.name attribute, Schema Registry + identifier and version embedded at 
> the start of the record/stream, avro.schema attribute, or an embedded schema 
> for cases like Avro, where the schema can be embedded in the content itself.
> On the writer side, we should also expose these options in order to convey 
> the schema information to others.
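The strategy choice described in the ticket could be modeled roughly as an enum dispatch. This is a self-contained sketch with hypothetical names; the real implementation would build on NiFi's SchemaAccessStrategy and AllowableValue classes rather than a plain enum:

```java
import java.util.Map;

// Illustrative sketch of the "Schema Access Strategy" choices from the
// ticket; names are made up for illustration, not the actual NiFi API.
public class SchemaAccessStrategySketch {

    enum Strategy {
        SCHEMA_NAME_PROPERTY, // look up schema in a Schema Registry by the "schema.name" attribute
        SCHEMA_TEXT_PROPERTY, // read the full schema text from the "avro.schema" attribute
        EMBEDDED_SCHEMA       // schema is embedded in the content itself (e.g. Avro files)
    }

    // Resolve a schema description from flowfile attributes according to
    // the chosen strategy.
    static String resolveSchema(Strategy strategy, Map<String, String> attributes) {
        switch (strategy) {
            case SCHEMA_NAME_PROPERTY:
                return "registry-lookup:" + attributes.get("schema.name");
            case SCHEMA_TEXT_PROPERTY:
                return attributes.get("avro.schema");
            case EMBEDDED_SCHEMA:
                return "embedded-in-content";
            default:
                throw new IllegalArgumentException("Unknown strategy: " + strategy);
        }
    }

    public static void main(String[] args) {
        Map<String, String> attrs = Map.of(
            "schema.name", "events",
            "avro.schema", "{\"type\":\"string\"}");
        System.out.println(resolveSchema(Strategy.SCHEMA_NAME_PROPERTY, attrs)); // registry-lookup:events
        System.out.println(resolveSchema(Strategy.EMBEDDED_SCHEMA, attrs));      // embedded-in-content
    }
}
```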



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
