[ https://issues.apache.org/jira/browse/NIFI-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16656333#comment-16656333 ]

ASF GitHub Bot commented on NIFI-5706:
--------------------------------------

Github user mohitgargk commented on the issue:

    https://github.com/apache/nifi/pull/3079
  
    > What is the use case for having Parquet files flow through NiFi?
    I have a use case where I write Parquet files into a data lake (in Azure) 
    and also need to write them to a network storage device. The intent here 
    is to keep the Parquet in the flowfile, so that it can flow into any sink 
    later on. If we didn't have Convert*ToParquet (or similar processors), we 
    would need an extra hop (an I/O endpoint) that may hurt performance. 
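    The pattern described above — converting to Parquet in memory so the 
    bytes stay in the flowfile and can be routed to any sink — can be 
    sketched roughly as follows. This is a hypothetical illustration using 
    pyarrow, not the parquet-avro (Java) library NiFi itself uses:

    ```python
    import io

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Hypothetical records standing in for the contents of a decoded
    # Avro flowfile.
    records = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]

    # Write Parquet into an in-memory buffer instead of a file on disk --
    # analogous to writing into the flowfile so any downstream sink
    # (local disk, S3, Azure Data Lake, ...) can consume it later.
    table = pa.Table.from_pylist(records)
    buf = io.BytesIO()
    pq.write_table(table, buf)

    parquet_bytes = buf.getvalue()
    # Parquet files begin and end with the 4-byte magic "PAR1".
    ```

    The key point is that no intermediate file is created; the Parquet 
    bytes exist only in memory until a downstream processor puts them 
    wherever they need to go.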
    



> Processor ConvertAvroToParquet 
> -------------------------------
>
>                 Key: NIFI-5706
>                 URL: https://issues.apache.org/jira/browse/NIFI-5706
>             Project: Apache NiFi
>          Issue Type: New Feature
>          Components: Extensions
>    Affects Versions: 1.7.1
>            Reporter: Mohit
>            Priority: Major
>              Labels: pull-request-available
>
> *Why*?
> PutParquet support is limited to HDFS. 
> PutParquet bypasses the _flowfile_ implementation and writes the file 
> directly to the sink. 
> We need a processor for Parquet that works like _ConvertAvroToOrc_.
> *What*?
> _ConvertAvroToParquet_ will convert the incoming Avro flowfile to a Parquet 
> flowfile. Unlike PutParquet, which writes to the HDFS file system, 
> ConvertAvroToParquet would write into the flowfile, which can then be 
> pipelined into other sinks, such as _local_, _S3_, _Azure Data Lake_, etc.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
