Hi Andrew,
We are receiving the Golden Gate transactions from Kafka, which NiFi ingests
through the ConsumeKafka processor. Our data flow then reduces the
Golden Gate JSON message and sends the data to the target table in HBase
using the PutHBaseJSON processor.
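As a side note, the "reduce" step described above can be sketched roughly as below. This is a minimal, hypothetical example: the field names (`table`, `op_type`, `before`/`after`) follow the common Golden Gate JSON formatter output, but your actual message layout may differ, and in NiFi this logic would typically live in a JoltTransformJSON or EvaluateJsonPath processor rather than a script.

```python
import json

# Hypothetical sample of a Golden Gate JSON message. op_type is I/U/D for
# insert/update/delete; "after" carries the new row image on inserts/updates.
sample = '''{
  "table": "SCHEMA.CUSTOMERS",
  "op_type": "I",
  "op_ts": "2018-06-11 09:43:00.000000",
  "after": {"ID": 1, "NAME": "Alice"}
}'''

def reduce_gg_message(raw):
    """Reduce a Golden Gate JSON message to the flat column/value map that
    a processor like PutHBaseJSON expects, plus the target table and op."""
    msg = json.loads(raw)
    # Inserts and updates carry the new row in "after"; deletes only "before".
    row = msg.get("after") or msg.get("before") or {}
    return msg["table"], msg["op_type"], row

print(reduce_gg_message(sample))
```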
Thanks,
Faisal
On Mon, Jun
Hi Faisal,
There are various ways this can be handled, but it is going to depend on
how you are receiving data from Oracle via Golden Gate. Are you using the
HBase Handler, the HDFS Handler, a flat file, Kafka, or some other means?
Thanks,
Andrew
On Mon, Jun 11, 2018 at 9:43 AM Faisal
Oh, now I get it! Yes, it's a unique instance of the ConsumeKafka proc for
each topic!
Kindly let me know how I can increase the flow controller thread pool size
and the timeout associated with any single Kafka consumer.
On Mon, Jun 11, 2018 at 11:17 AM Joe Witt wrote:
So you have a unique instance of the ConsumeKafka proc for each topic then,
right?
I'd increase the flow controller thread pool size by quite a bit as well.
On Sun, Jun 10, 2018, 10:13 PM Faisal Durrani wrote:
Hi,
Yes, the Kafka service is hosted on a single server, while NiFi is on a
cluster of 4 servers. I'm not entirely sure what wildcarding of topics is,
but Kafka is integrated with an Oracle Golden Gate instance, and the topics
are auto-generated as soon as a new table is created in Oracle.
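For what it's worth, since the topics are auto-generated per table, one common approach is to let a single ConsumeKafka instance subscribe by regular expression instead of a fixed name list. A sketch of the relevant processor properties (exact property names may vary by ConsumeKafka version, so treat this as an assumption to verify against your processor's usage docs):

```
# ConsumeKafka processor configuration (illustrative)
Topic Name Format : pattern
Topic Name(s)     : SCHEMA\..*        # regex matching the auto-generated topics
Group ID          : nifi-gg-consumer  # hypothetical consumer group name
```

With a pattern subscription, newly created topics that match the regex are picked up by the same consumer group without editing the flow.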
Hello
Is this a single instance with wildcarding of topics? Please share config
details.
If you want that in a single instance, you may need to alter the timeout
associated with any single Kafka consumer. The assignment will be per
topic per partition. How many threads does that processor have?
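To make the two knobs mentioned in this thread concrete, here is a hedged sketch of where each is tuned. The thread pool is a NiFi UI setting, and the consumer timeouts are standard Kafka consumer properties that ConsumeKafka forwards to the client as dynamic (user-added) properties; the example values below are illustrative, not recommendations:

```
# 1) Flow controller thread pool: in the NiFi UI, open the global menu ->
#    Controller Settings and raise "Maximum Timer Driven Thread Count"
#    (the default of 10 is often too low for many concurrent consumers).

# 2) Per-consumer timeouts: add these as dynamic properties on the
#    ConsumeKafka processor; they are passed through to the Kafka consumer.
session.timeout.ms=30000
request.timeout.ms=40000
```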
Is there a recommended way to ensure the row counts from tables in the
source (Oracle) are consistent with those of the target tables in HBase
(the data lake)? We are using NiFi, which receives the Golden Gate messages
and then, using different processors, stores the transactions in HBase, so
essentially
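One simple reconciliation pattern for a question like this is to collect per-table counts on both sides and diff them. The comparison itself is trivial; the sketch below assumes you have already pulled the counts (e.g. `SELECT COUNT(*)` on Oracle, and something like the HBase RowCounter job on the target), which is the part that varies by environment:

```python
def compare_counts(source_counts, target_counts):
    """Return tables whose row counts differ between source and target.

    source_counts / target_counts: dicts mapping table name -> row count.
    The result maps each mismatched table to (source_count, target_count).
    """
    mismatches = {}
    for table, src in source_counts.items():
        tgt = target_counts.get(table)
        if tgt != src:
            mismatches[table] = (src, tgt)
    return mismatches

# Illustrative counts only.
oracle = {"CUSTOMERS": 1000, "ORDERS": 540}
hbase = {"CUSTOMERS": 1000, "ORDERS": 538}
print(compare_counts(oracle, hbase))
```

Note that with a streaming CDC pipeline the counts will rarely match at an arbitrary instant; comparing as of a quiesced point (or by op timestamp window) gives a fairer check.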
Does anyone know about this error from Kafka? I am using NiFi 1.5.0 with
the ConsumeKafka processor.
ConsumeKafka[id=34753ed3-9dd6-15ed-9c91-147026236eee] Failed to retain
connection due to No current assignment for partition TEST_KAFKA_TOPIC:
This is the first time we are testing Nifi to consume