[
https://issues.apache.org/jira/browse/NIFI-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16655455#comment-16655455
]
ASF GitHub Bot commented on NIFI-5706:
--------------------------------------
Github user mattyb149 commented on the issue:
https://github.com/apache/nifi/pull/3079
What is the use case for having Parquet files flow through NiFi? I wrote
ConvertAvroToORC, and every time I saw it used, it was immediately followed by
PutHDFS. So I followed the PutParquet lead and wrote a PutORC for the Hive 3
NAR. We can't currently do anything with a Parquet file (or ORC, for that
matter) in NiFi, so I'm just curious how you envision it being used.
Also, I wonder if a ParquetRecordWriter might be a better idea? It would do
the same thing as this processor, but the record-based processors can read
input in any format, not just Avro. That was another impetus for having PutORC
instead of ConvertAvroToORC.
> Processor ConvertAvroToParquet
> -------------------------------
>
> Key: NIFI-5706
> URL: https://issues.apache.org/jira/browse/NIFI-5706
> Project: Apache NiFi
> Issue Type: New Feature
> Components: Extensions
> Affects Versions: 1.7.1
> Reporter: Mohit
> Priority: Major
> Labels: pull-request-available
>
> *Why?*
> PutParquet support is limited to HDFS.
> PutParquet bypasses the _flowfile_ implementation and writes the file
> directly to the sink.
> We need a processor for Parquet that works like _ConvertAvroToOrc_.
> *What?*
> _ConvertAvroToParquet_ will convert the incoming Avro flowfile to a Parquet
> flowfile. Unlike PutParquet, which writes to the HDFS file system,
> ConvertAvroToParquet would write into the flowfile itself, which can then be
> pipelined to other sinks, such as the _local_ file system, _S3_, or _Azure
> Data Lake_.
>
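The pipelining described in the issue could be sketched as a NiFi flow like
the following (a minimal sketch; the upstream source and the specific sink
processors shown are illustrative assumptions, not part of the proposal):

```
FetchFile (Avro content)
        |
        v
ConvertAvroToParquet          (Avro flowfile -> Parquet flowfile)
        |
        +--> PutFile                   (local file system)
        +--> PutS3Object               (Amazon S3)
        +--> PutAzureDataLakeStorage   (Azure Data Lake)
```

The key difference from PutParquet is that the Parquet bytes stay in the
flowfile content, so any downstream put-style processor can deliver them.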
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)