> Which version of NiFi would work best for RDBMS processors? I'll check
> with platform folks if we can go for a version upgrade.
>
> Thanks, appreciate your help
>
> From: Matt Burgess [mailto:mattyb...@gmail.com]
> Sent: Friday, September 29, 2017 6:48 PM
> To: users@nifi.apache.org
> Subject: Re: ExecuteSQL question: how do I stop long running queries
Vikram,
I'm not at my computer right now so I'm shooting from the hip, but depending on
how complex your query is (meaning if it is very simple), take a look at
QueryDatabaseTable and GenerateTableFetch. If you are looking to get all rows
(versus incremental fetching), you can omit the maximum
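The incremental-fetch behavior described above (only pulling rows newer than the last seen maximum-value column) can be sketched in a few lines. This is a minimal illustration against an in-memory SQLite database; the table and column names ("orders", "id") are made up, not from the thread.

```python
import sqlite3

# Hypothetical table standing in for the operational database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
conn.executemany("INSERT INTO orders (id, item) VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])

def incremental_fetch(conn, last_max):
    """Fetch only rows with id greater than the last seen maximum,
    and return the new maximum to carry into the next run."""
    rows = conn.execute(
        "SELECT id, item FROM orders WHERE id > ? ORDER BY id",
        (last_max,)).fetchall()
    new_max = rows[-1][0] if rows else last_max
    return rows, new_max

last_max = 0                                        # state kept between runs
rows, last_max = incremental_fetch(conn, last_max)  # first run: all 3 rows
rows, last_max = incremental_fetch(conn, last_max)  # second run: nothing new
```

Omitting the max-value column collapses this to a plain `SELECT *` that returns every row on every run, which matches the "get all rows" case mentioned above.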
Hi,
I am using the ExecuteSQL processor to pull from an operational database, and for
some of the tables it keeps running for more than 24 hrs.
1] During a long-running query against a database (e.g. Oracle) being executed by
an 'ExecuteSql' process, is there a way to check the progress - say by seeing
f
And a final update…
It turns out that ExecuteSql doesn't distribute processing across the nodes in
the cluster – not surprising really when I think about it – so the refinement to
the below was to just have the text file on one node so that it effectively
only runs on that node. Of course I could hav
Thanks for the input on this.
As a follow-up, this is what I have done…
Created a text file which contains the single value of the lowest ID of the data
I am retrieving, saved on each node of the cluster.
Then:
GetFile (to read the file)
ExtractText (to get the value – and set it to an attribute)
UpdateAttribute (to
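The state-file part of the flow above boils down to a single-value text file that is read at the start of each run and rewritten afterwards. A minimal sketch of that read/write cycle, with invented file and helper names:

```python
import os
import tempfile

# Hypothetical state file holding the last/lowest ID, one value per file,
# mirroring what GetFile + ExtractText read in the flow above.
state_path = os.path.join(tempfile.mkdtemp(), "last_id.txt")

def read_last_id(path, default=0):
    """Read the tracked ID; fall back to a default on the very first run."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except FileNotFoundError:
        return default

def write_last_id(path, value):
    """Rewrite the state file with the new high-water mark."""
    with open(path, "w") as f:
        f.write(str(value))

write_last_id(state_path, 42)
current = read_last_id(state_path)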
Hi,
I have exactly the same use case: to periodically get rows from some
security appliances with just read-only access.
Currently (without NiFi), we use an SQL query to track the maximum
value; depending on the DB/appliance/vendor, it could be a simple
"SELECT getdate()" or "select max(SW_TIM
For that approach I would think either the MapCache or the File would work. The
trick will be getting the max value out of the flow file. After
QueryDatabaseTable you could split the Avro and convert to JSON (or vice
versa), then update the MapCache or File. I'm not sure the order of records is
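Once the Avro result has been converted to JSON as suggested above, pulling the maximum tracked value out of the records is a one-liner. A small sketch; the field name "sw_time" is hypothetical, standing in for whatever max-value column the appliance exposes:

```python
import json

# JSON records as they might look after a ConvertAvroToJSON-style step;
# the "sw_time" field is an invented example.
records = json.loads('[{"id": 1, "sw_time": 100}, {"id": 2, "sw_time": 250}]')

# The new high-water mark to store in the MapCache or state file.
max_val = max(r["sw_time"] for r in records)
```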
Hi,
Thanks for this.
I did think about a MV but unfortunately I don't have access to create views –
just read access. That would have been my simplest option ;-) Life's never that
easy though, is it?
The only part of the SQL I need to be dynamic is the date parameter (I could
even use the id column
Conrad,
Is it possible to add a view (materialized or not) to the RDBMS? That
view could take care of the denormalization and then
QueryDatabaseTable could point at the view. The DB would take care of
the push-down filters, which functionally is like if you had a
QueryDatabaseTable for each table
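The view idea above can be sketched concretely: a view denormalizes the joined tables, and a filter applied to the view is pushed down to the base tables by the database. A minimal illustration in SQLite with invented table names:

```python
import sqlite3

# Hypothetical normalized tables plus a denormalizing view; a
# QueryDatabaseTable-style incremental filter then targets the view.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (order_id INTEGER PRIMARY KEY, cust_id INTEGER, ts INTEGER);
CREATE VIEW order_flat AS
  SELECT o.order_id, o.ts, c.name
  FROM orders o JOIN customer c ON o.cust_id = c.cust_id;
INSERT INTO customer VALUES (1, 'acme');
INSERT INTO orders VALUES (10, 1, 100), (11, 1, 200);
""")

# Incremental fetch against the view: only rows past the high-water mark.
rows = conn.execute(
    "SELECT order_id, name FROM order_flat WHERE ts > ?", (150,)).fetchall()
```

The database plans the `ts > ?` predicate against the underlying `orders` table, which is functionally what having a max-value filter per base table would give you.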
Hi,
My use case is that I want to ship a load of rows from an RDBMS periodically and
put them in HDFS as Avro.
The QueryDatabaseTable processor has functionality that would be great, i.e. max
column value (there are a couple of columns I could use for this from the data),
and it is this functionality I am looking for,