I think option 2 is the best way to handle this.
Technology naturally changes over time, and some components of NiFi might no
longer make the most sense to keep around in the main line for the masses.
However, I really like still having them there for people to very simply add if
they so choose.
James
Some are definitely less fun than others, with Hive being the most notable.
I should rephrase my vendor comment on point one: as far as I know, all of the
Hadoop components are vendor supported. Whether NiFi is or not is a different
point.
Option 2 is the most realistic, I suspect, but still want
I'm a Hadoop and NiFi user without vendor support, so unsurprisingly I'm not
keen on #1, but then relying on community support and development is always
going to be a risk for us. If it came to it, we'd probably stop using NiFi
rather than pay a vendor, which would be a real shame.
Are certain
I am wondering if the standard NiFi JDBC/ODBC processors, with some basic
testing against the common drivers (Simba's Hive drivers, etc.), could help
alleviate the issue without needing separate HiveQL processors.
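For illustration, the generic-JDBC route suggested above might look like the sketch below: a plain java.sql connection to HiveServer2, the kind of access NiFi's standard ExecuteSQL/DBCPConnectionPool components provide. The host, port, and database are placeholder assumptions; a vendor driver such as Simba's would use its own driver class and URL scheme instead of the Apache "jdbc:hive2" one shown here.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSketch {

    // Build a HiveServer2 JDBC URL; "jdbc:hive2" is the scheme used by
    // the Apache Hive JDBC driver (org.apache.hive.jdbc.HiveDriver).
    static String hiveUrl(String host, int port, String database) {
        return "jdbc:hive2://" + host + ":" + port + "/" + database;
    }

    // Run a query through plain JDBC, as a generic NiFi SQL processor
    // would. Requires a live HiveServer2 and the Hive JDBC driver on
    // the classpath, so it is not invoked in main() below.
    static void runQuery(String url, String sql) throws Exception {
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }

    public static void main(String[] args) {
        // Illustrative placeholders, not a real endpoint.
        System.out.println(hiveUrl("hive.example.com", 10000, "default"));
    }
}
```

The point of the sketch is that nothing in it is Hive-specific beyond the driver and URL, which is why generic JDBC processors plus a tested driver could stand in for dedicated HiveQL processors.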
GC
From: Bryan Bende
Sent: Friday, March 24, 2023
I lean towards option 2 with the caveat that maybe we don't have to
retain every Hadoop-related component when creating this separate set
of components. Mainly I'm thinking that Hive has been the most
problematic to maintain, so maybe that is dropped altogether. I think
it would be unfortunate to
As one of the small number of people that fight the battle, I like the
idea of Option 1 (full disclosure: I work for a vendor). From a
community standpoint (I'm on the PMC) I'm not strongly opposed to
Option 2 although I wouldn't want to be the one managing and releasing
the artifacts :) Having
Team,
For the full time NiFi has been in Apache, we've built with support for
various Hadoop ecosystem components like HDFS, Hive, HBase, and others,
and more recently format/serialization support for Parquet, ORC,
Iceberg, etc.
All of these things, however, present endless challenges