[ https://issues.apache.org/jira/browse/NIFI-1280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962811#comment-15962811 ]

ASF GitHub Bot commented on NIFI-1280:
--------------------------------------

Github user markap14 commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/1652#discussion_r110647404
  
    --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryFlowFile.java ---
    @@ -0,0 +1,550 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.nifi.processors.standard;
    +
    +import java.io.Closeable;
    +import java.io.IOException;
    +import java.io.OutputStream;
    +import java.sql.Connection;
    +import java.sql.DriverManager;
    +import java.sql.PreparedStatement;
    +import java.sql.ResultSet;
    +import java.sql.SQLException;
    +import java.sql.Statement;
    +import java.util.ArrayList;
    +import java.util.Collection;
    +import java.util.Collections;
    +import java.util.HashMap;
    +import java.util.HashSet;
    +import java.util.List;
    +import java.util.Map;
    +import java.util.Properties;
    +import java.util.Set;
    +import java.util.concurrent.BlockingQueue;
    +import java.util.concurrent.LinkedBlockingQueue;
    +import java.util.concurrent.TimeUnit;
    +import java.util.concurrent.atomic.AtomicReference;
    +import java.util.function.Supplier;
    +
    +import org.apache.calcite.config.CalciteConnectionProperty;
    +import org.apache.calcite.config.Lex;
    +import org.apache.calcite.jdbc.CalciteConnection;
    +import org.apache.calcite.schema.SchemaPlus;
    +import org.apache.calcite.sql.parser.SqlParser;
    +import org.apache.nifi.annotation.behavior.DynamicProperty;
    +import org.apache.nifi.annotation.behavior.DynamicRelationship;
    +import org.apache.nifi.annotation.behavior.EventDriven;
    +import org.apache.nifi.annotation.behavior.InputRequirement;
    +import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
    +import org.apache.nifi.annotation.behavior.SideEffectFree;
    +import org.apache.nifi.annotation.behavior.SupportsBatching;
    +import org.apache.nifi.annotation.documentation.CapabilityDescription;
    +import org.apache.nifi.annotation.documentation.Tags;
    +import org.apache.nifi.annotation.lifecycle.OnScheduled;
    +import org.apache.nifi.annotation.lifecycle.OnStopped;
    +import org.apache.nifi.components.PropertyDescriptor;
    +import org.apache.nifi.components.ValidationContext;
    +import org.apache.nifi.components.ValidationResult;
    +import org.apache.nifi.components.Validator;
    +import org.apache.nifi.flowfile.FlowFile;
    +import org.apache.nifi.flowfile.attributes.CoreAttributes;
    +import org.apache.nifi.processor.AbstractProcessor;
    +import org.apache.nifi.processor.ProcessContext;
    +import org.apache.nifi.processor.ProcessSession;
    +import org.apache.nifi.processor.ProcessorInitializationContext;
    +import org.apache.nifi.processor.Relationship;
    +import org.apache.nifi.processor.exception.ProcessException;
    +import org.apache.nifi.processor.io.OutputStreamCallback;
    +import org.apache.nifi.queryflowfile.FlowFileTable;
    +import org.apache.nifi.serialization.RecordSetWriter;
    +import org.apache.nifi.serialization.RecordSetWriterFactory;
    +import org.apache.nifi.serialization.RowRecordReaderFactory;
    +import org.apache.nifi.serialization.WriteResult;
    +import org.apache.nifi.serialization.record.ResultSetRecordSet;
    +import org.apache.nifi.util.StopWatch;
    +
    +@EventDriven
    +@SideEffectFree
    +@SupportsBatching
    +@Tags({"sql", "query", "calcite", "route", "record", "transform", "select", "update", "modify", "etl", "filter", "csv", "json", "logs", "text", "avro", "aggregate"})
    +@InputRequirement(Requirement.INPUT_REQUIRED)
    +@CapabilityDescription("Evaluates one or more SQL queries against the contents of a FlowFile. The result of the "
    +    + "SQL query then becomes the content of the output FlowFile. This can be used, for example, "
    +    + "for field-specific filtering, transformation, and row-level filtering. "
    +    + "Columns can be renamed, simple calculations and aggregations performed, etc. "
    +    + "The Processor is configured with a Record Reader Controller Service and a Record Writer service so as to allow flexibility in incoming and outgoing data formats. "
    +    + "The Processor must be configured with at least one user-defined property. The name of the Property "
    +    + "is the Relationship to route data to, and the value of the Property is a SQL SELECT statement that is used to specify how input data should be transformed/filtered. "
    +    + "The SQL statement must be valid ANSI SQL and is powered by Apache Calcite. "
    +    + "If the transformation fails, the original FlowFile is routed to the 'failure' relationship. Otherwise, the data selected will be routed to the associated "
    +    + "relationship. See the Processor Usage documentation for more information.")
    +@DynamicRelationship(name="<Property Name>", description="Each user-defined property defines a new Relationship for this Processor.")
    +@DynamicProperty(name = "The name of the relationship to route data to", value="A SQL SELECT statement that is used to determine what data should be routed to this "
    +        + "relationship.", supportsExpressionLanguage=true, description="Each user-defined property specifies a SQL SELECT statement to run over the data, with the data "
    +        + "that is selected being routed to the relationship whose name is the property name")
    +public class QueryFlowFile extends AbstractProcessor {
    +    static final PropertyDescriptor RECORD_READER_FACTORY = new PropertyDescriptor.Builder()
    +        .name("Record Reader")
    +        .description("Specifies the Controller Service to use for parsing incoming data and determining the data's schema")
    +        .identifiesControllerService(RowRecordReaderFactory.class)
    +        .required(true)
    +        .build();
    +    static final PropertyDescriptor RECORD_WRITER_FACTORY = new PropertyDescriptor.Builder()
    +        .name("Record Writer")
    +        .description("Specifies the Controller Service to use for writing results to a FlowFile")
    +        .identifiesControllerService(RecordSetWriterFactory.class)
    +        .required(true)
    +        .build();
    +    static final PropertyDescriptor INCLUDE_ZERO_RECORD_FLOWFILES = new PropertyDescriptor.Builder()
    +        .name("Include Zero Record FlowFiles")
    +        .description("When running the SQL statement against an incoming FlowFile, if the result has no data, "
    +            + "this property specifies whether or not a FlowFile will be sent to the corresponding relationship")
    +        .expressionLanguageSupported(false)
    +        .allowableValues("true", "false")
    +        .defaultValue("true")
    +        .required(true)
    +        .build();
    +    static final PropertyDescriptor CACHE_SCHEMA = new PropertyDescriptor.Builder()
    +        .name("Cache Schema")
    +        .description("Parsing the SQL query and deriving the FlowFile's schema is relatively expensive. If this value is set to true, "
    +            + "the Processor will cache these values so that the Processor is much more efficient and much faster. However, if this is done, "
    +            + "then the schema that is derived for the first FlowFile processed must apply to all FlowFiles. If not all FlowFiles will have the exact "
    +            + "same schema, or if the SQL SELECT statement uses the Expression Language, this value should be set to false.")
    +        .expressionLanguageSupported(false)
    +        .allowableValues("true", "false")
    +        .defaultValue("true")
    +        .required(true)
    +        .build();
    +
    +    public static final Relationship REL_ORIGINAL = new Relationship.Builder()
    +        .name("original")
    +        .description("The original FlowFile is routed to this relationship")
    +        .build();
    +    public static final Relationship REL_FAILURE = new Relationship.Builder()
    +        .name("failure")
    +        .description("If a FlowFile fails processing for any reason (for example, the SQL "
    +            + "statement contains columns not present in input data), the original FlowFile will "
    +            + "be routed to this relationship")
    +        .build();
    +
    +    private List<PropertyDescriptor> properties;
    +    private final Set<Relationship> relationships = Collections.synchronizedSet(new HashSet<>());
    +
    +    private final Map<String, BlockingQueue<CachedStatement>> statementQueues = new HashMap<>();
    +
    +    @Override
    +    protected void init(final ProcessorInitializationContext context) {
    +        try {
    +            DriverManager.registerDriver(new org.apache.calcite.jdbc.Driver());
    +        } catch (final SQLException e) {
    +            throw new ProcessException("Failed to load Calcite JDBC Driver", e);
    +        }
    +
    +        final List<PropertyDescriptor> properties = new ArrayList<>();
    +        properties.add(RECORD_READER_FACTORY);
    +        properties.add(RECORD_WRITER_FACTORY);
    +        properties.add(INCLUDE_ZERO_RECORD_FLOWFILES);
    +        properties.add(CACHE_SCHEMA);
    +        this.properties = Collections.unmodifiableList(properties);
    +
    +        relationships.add(REL_FAILURE);
    +        relationships.add(REL_ORIGINAL);
    +    }
    --- End diff ---
    
    This is done in the init() method so that it is done only once per 
Processor. It cannot be done statically because the relationships change 
depending on processor configuration (user-defined properties add new 
relationships).
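The interaction described above can be sketched without any NiFi dependencies. This is an illustrative, NiFi-free sketch only: the class and method names below merely mimic the shape of AbstractProcessor's `onPropertyModified` callback, which is what lets a user-defined property add (or remove) a relationship at configuration time — and is why the relationship set cannot be built statically.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical, NiFi-free sketch of dynamic relationships: each user-defined
// property name becomes a relationship, so the set depends on configuration.
public class DynamicRelationshipSketch {

    // Fixed relationships, mirroring REL_ORIGINAL and REL_FAILURE in the diff.
    private final Set<String> relationships =
        Collections.synchronizedSet(new HashSet<>(Arrays.asList("original", "failure")));

    // Mimics onPropertyModified(descriptor, oldValue, newValue): adding a
    // dynamic property adds a relationship; deleting it (newValue == null)
    // removes the relationship again.
    public void onPropertyModified(final String propertyName, final String oldValue, final String newValue) {
        if (newValue == null) {
            relationships.remove(propertyName);
        } else {
            relationships.add(propertyName);
        }
    }

    public Set<String> getRelationships() {
        return Collections.unmodifiableSet(relationships);
    }

    public static void main(final String[] args) {
        final DynamicRelationshipSketch processor = new DynamicRelationshipSketch();
        // The user adds a property named "adults" whose value is a SQL SELECT;
        // a new "adults" relationship appears alongside original/failure.
        processor.onPropertyModified("adults", null, "SELECT * FROM FLOWFILE WHERE age >= 21");
        System.out.println(processor.getRelationships());
    }
}
```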


> Create QueryFlowFile Processor
> ------------------------------
>
>                 Key: NIFI-1280
>                 URL: https://issues.apache.org/jira/browse/NIFI-1280
>             Project: Apache NiFi
>          Issue Type: Task
>          Components: Extensions
>            Reporter: Mark Payne
>            Assignee: Mark Payne
>             Fix For: 1.2.0
>
>         Attachments: QueryFlowFile_Record_Reader-Writer_Examples.xml
>
>
> We should have a Processor that allows users to easily filter out specific 
> columns from CSV data. For instance, a user would configure two different 
> properties: "Columns of Interest" (a comma-separated list of column indexes) 
> and "Filtering Strategy" (Keep Only These Columns, Remove Only These Columns).
> We can do this today with ReplaceText, but it is far more difficult than it 
> would be with this Processor, as the user has to use Regular Expressions, 
> etc. with ReplaceText.
> Eventually, a Custom UI could even be built that allows a user to upload a 
> sample CSV and choose the desired columns by dragging and selecting them, 
> similar to the way Excel works when importing CSV. That would certainly be a 
> larger undertaking and would not need to be done for an initial 
> implementation.
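The "Columns of Interest" / "Filtering Strategy" idea from the issue description can be sketched in plain Java. Everything here is hypothetical (the property names are from the issue text, not a real NiFi API); it only shows the keep/remove-by-index behavior the issue asks for, which QueryFlowFile ultimately generalizes via SQL.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Hypothetical sketch of the issue's "Columns of Interest" + "Filtering
// Strategy" properties applied to a single CSV line.
public class ColumnFilterSketch {

    enum Strategy { KEEP_ONLY_THESE_COLUMNS, REMOVE_ONLY_THESE_COLUMNS }

    // Keeps or removes the fields at the given 0-based column indexes.
    public static String filter(final String csvLine, final Set<Integer> columns, final Strategy strategy) {
        final String[] fields = csvLine.split(",", -1);
        return IntStream.range(0, fields.length)
            .filter(i -> (strategy == Strategy.KEEP_ONLY_THESE_COLUMNS) == columns.contains(i))
            .mapToObj(i -> fields[i])
            .collect(Collectors.joining(","));
    }

    public static void main(final String[] args) {
        final Set<Integer> columnsOfInterest = new HashSet<>(Arrays.asList(0, 2));
        // Keep only columns 0 and 2 of "alice,30,NYC,engineer" -> "alice,NYC"
        System.out.println(filter("alice,30,NYC,engineer", columnsOfInterest, Strategy.KEEP_ONLY_THESE_COLUMNS));
        // Remove columns 0 and 2 -> "30,engineer"
        System.out.println(filter("alice,30,NYC,engineer", columnsOfInterest, Strategy.REMOVE_ONLY_THESE_COLUMNS));
    }
}
```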



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)