James,
Try using SLF4J to get the logger if you aren't already. Like this:
def log = LoggerFactory.getLogger(this.getClass())
https://www.slf4j.org/apidocs/org/slf4j/LoggerFactory.html
On Tue, Feb 26, 2019 at 4:11 PM Matt Burgess wrote:
> Oh right, not sure why I thought Jim was using ExecuteGroovySc
sor.
> Which is funny because we just created ours to work with Hive on CDH.
>
> Kudu based lookup also sounds great - we love Kudu and started using it
> recently for real-time replication of Oracle databases into our cluster.
>
> Boris
>
>
>
> On Fri, Feb 22, 2019
@Boris
Mark's approach will work for a lot of scenarios. I've used it extensively
with different clients.
On Fri, Feb 22, 2019 at 1:10 PM Mark Payne wrote:
> This is certainly a better route to go than my previous suggestion :) Have
> one flow that grabs one of the datasets and stores it somewh
R that is not included in the distro but is available
> for download. Just saying an ExtensionRegistry would make that easier
> :)
>
> Regards,
> Matt
>
> [1]
> https://repository.apache.org/content/repositories/releases/org/apache/nifi/nifi-hive3-nar/1.9.0/
>
> On Wed, Fe
>> published anyway (even if not included in the distro), [1] is an
>> example of a NAR that is not included in the distro but is available
>> for download. Just saying an ExtensionRegistry would make that easier
>> :)
>>
>> Regards,
>> Matt
>>
>> [1
>
> Hope this info helps, and if you want I can keep posted the results of
> this last two topics.
>
> Regards,
>
> LC
>
>
>
> --
> *From: *"Mike Thomsen"
> *To: *"users"
> *Sent: *Wednesday, February 20, 2
/nifi/pull/3299
>
> Cheers,
> Joe
>
> On Wed, Feb 20, 2019 at 8:08 AM Mike Thomsen
> wrote:
>
>> I'm looking for feedback from ElasticSearch users on how they use and how
>> they **want** to use ElasticSearch v5 and newer with NiFi.
>>
>> So please
I think the SQL processors other than PutDatabaseRecord also support
"upsert" functionality, so that might also help.
On Wed, Feb 20, 2019 at 8:33 AM Mike Thomsen wrote:
> The easiest way to do this would be to create a UNIQUE constraint on the
> project name and just send one
The easiest way to do this would be to create a UNIQUE constraint on the
project name and just send one insert at a time. Then each individual
failed insert will get routed to failure.
For the sake of safety here, if you have multiple flows that feed into a
common SQL ingest point, you might want
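The constraint-plus-single-insert idea above can be sketched in SQL (table and column names here are made up for illustration):

```sql
-- A UNIQUE constraint makes the database reject duplicate project names,
-- so each duplicate INSERT fails on its own and routes to failure.
ALTER TABLE projects ADD CONSTRAINT uq_project_name UNIQUE (project_name);

INSERT INTO projects (project_name, owner) VALUES ('alpha', 'jim');
INSERT INTO projects (project_name, owner) VALUES ('alpha', 'bob'); -- violates the constraint
```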
I'm looking for feedback from ElasticSearch users on how they use and how
they **want** to use ElasticSearch v5 and newer with NiFi.
So please respond with some use cases and what you want, what frustrates
you, etc. so I can prioritize Jira tickets for the ElasticSearch REST API
bundle.
(Note: ba
I would like to mark the v5 Elastic bundle as deprecated in 1.10. Per
Elastic's official guidelines, the transport API--which that bundle uses--is
deprecated in Elasticsearch 7 and slated for removal (at least from public
accessibility) in Elasticsearch 8.
https://www.elastic.co/guide/en/elasticsearch/client/java-api/master/
s. A fair request, IMO.
>
> Supporting a full EL for the keystore/truststore path is a bad idea, no
> doubt.
>
> Do you agree?
>
> Andrew
>
> On Tue, Feb 19, 2019, 3:33 AM Mike Thomsen wrote:
>
>> When expression language is not supported by a field, it won't acc
When expression language is not supported by a field, it won't accept any
variables.
Mike
On Mon, Feb 18, 2019 at 10:34 PM Beutel, Maximilian <
maximilian.beu...@credit-suisse.com> wrote:
> Hello!
>
>
>
> Also asked the question on IRC, but figured the mailing list might be
> better for this lon
host and port
> used?
>
> BR,
> Tom
>
> On Mon, 18 Feb 2019, 23:56 Mike Thomsen
>> Tom,
>>
>> > Note: both Registry and Nifi are in Docker containers on the same node.
>> I tried with IP address, but nothing.
>>
>> Each docker container has its
Tom,
> Note: both Registry and Nifi are in Docker containers on the same node. I
tried with IP address, but nothing.
Each docker container has its own IP address. You need to link the two
containers. I always use Docker Compose, so I can't help you on how to set
it up manually. That said, I did a
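A minimal Compose sketch of the linking described above (image tags and service names are assumptions; adjust ports to your setup):

```yaml
version: "3"
services:
  nifi:
    image: apache/nifi:latest
    ports:
      - "8080:8080"
  registry:
    image: apache/nifi-registry:latest
    ports:
      - "18080:18080"
```

With both services on the same Compose network, NiFi can reach the registry by service name, e.g. http://registry:18080, instead of a container IP address.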
him a little while to join and announce he's ready to go over it
before I move forward with a discussion on this.
On Sat, Feb 9, 2019 at 12:34 PM Mike Thomsen wrote:
> PR if anyone is interested:
>
> https://github.com/apache/nifi/pull/3298
>
> On Fri, Feb 8, 2019 at 5:34
> There is nothing "distributed" about them.
That is not true of at least the HBase distributed map cache client. Never
used CouchDB, but I believe that is clusterable too.
On Thu, Feb 14, 2019 at 8:15 AM Boris Tyukin wrote:
> I am not NiFi dev, but personally, after looking at DistributedCache
Anyone tried to connect NiFi to something that is API-compatible with S3
like Minio, SWIFT or Ceph?
Thanks,
Mike
Ali,
There is a site to site publishing task for provenance that you can add as
a root controller service that would be great here. It'll just take all of
your provenance data periodically and ship it off to another NiFi server or
cluster that can process all of the provenance data as blocks of JS
the Writer?
> What do you have set as the Avro Writer's Schema Access Strategy? What
> version of NiFi are you running?
>
> Thanks
> -Mark
>
>
> > On Feb 13, 2019, at 9:55 AM, Mike Thomsen
> wrote:
> >
> > I have a pretty simple statement like this:
>
I have a pretty simple statement like this:
SELECT * FROM FLOWFILE WHERE action = 'X'
We have a long field that is nullable ( ["null", "long"] so we're clear)
and QueryRecord throws an exception saying that it couldn't handle a null
in that field.
NullPointerException: null of long in field [na
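For context, a nullable long in an Avro schema is declared as a union with null; a minimal sketch (record and field names are illustrative, not from the actual flow):

```json
{
  "type": "record",
  "name": "Event",
  "fields": [
    { "name": "action",   "type": "string" },
    { "name": "duration", "type": ["null", "long"], "default": null }
  ]
}
```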
Not at the moment, that could be a useful improvement.
On Tue, Feb 12, 2019 at 3:11 PM Shawn Weeks
wrote:
> With the NiFi TestRunner class for a Processor, is there a way to have it
> write the output stream of the processor to disk so that it’s not trying to
> store the thing in a ByteArrayOutp
PR if anyone is interested:
https://github.com/apache/nifi/pull/3298
On Fri, Feb 8, 2019 at 5:34 PM Mike Thomsen wrote:
> With Redis and HBase you can set a TTL on the data itself in the lookup
> table. Were you thinking something more than that?
>
> On Fri, Feb 8, 2019 at 4:
ecords.
>
> Andrew
>
> On Fri, Feb 8, 2019, 8:22 AM Mike Thomsen wrote:
>
>> Thanks. That answers it succinctly for me. I'll build out a
>> DetectDuplicateRecord processor to handle this.
>>
>> On Fri, Feb 8, 2019 at 11:17 AM Mark Payne wrote:
>>
>>
t all columns and there
> > are lots of them.
> >
> > Alternatively you could try PartitionRecord -> QueryRecord (select *
> > limit 1). Neither PartitionRecord nor QueryRecord keeps state so you'd
> > likely need to use distributed cache or UpdateAttribute.
>
Do we have anything like DetectDuplicate for the Record API already? Didn't
see anything, but wanted to ask before reinventing the wheel.
Thanks,
Mike
Josef,
In addition to what Bryan said, if the code is from or related to work at
your employer, make sure that you have management approval to make sure
everyone's covered. If not, just go ahead and submit.
Thanks,
Mike
On Wed, Feb 6, 2019 at 9:47 AM wrote:
> Perfect, thank you Bryan.
>
> Che
the GetMongo stuck and nothing appears in bulletin.
>
>
>
> For example.
>
> Query={"id":123}
>
> The above is a valid query; however, MongoDB doesn't have anything associated
> with it, so we get an empty response.
>
>
>
>
>
>
>
> Regards,
>
>
with some Error message.
>
>
>
>1. Invalid Query [Existing issue we discussed]: We tried the below
>solution you have provided and it is working fine. However, do we need to
>raise the JIRA, to get it fixed by the way of pulling it off from the
>attribute?
>
I agree, and filed a Jira ticket for it:
https://issues.apache.org/jira/browse/NIFI-5995
FWIW, I've used Groovy for this sort of thing a lot. You really can't go
wrong with that choice.
On Thu, Jan 31, 2019 at 4:14 AM happy smith wrote:
> Thanks a lot for the quick answer. Even if I am not fami
each flow file has had its SQL updated to UPSERT,
> they could have been executed as a single transaction instead of
> having to go through individually. That's why the UPSERT capability
> for PDR would be so handy, and now that you've reminded me, I should
> probably get back t
The syntax for Phoenix that I see all over the place is an upsert. How do I
specify that with PutDatabaseRecord?
Thanks,
Mike
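For reference, Phoenix uses a single UPSERT statement in place of INSERT/UPDATE; a sketch with a hypothetical table:

```sql
-- Inserts the row if the key is new, otherwise updates it in place.
UPSERT INTO users (id, username) VALUES (1, 'mike');
```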
Can you elaborate on the first one? The way I understood it was:
1. Set up a client service.
2. Give it an invalid URL
3. Enable.
4. Processor should now accept queries.
Wasn't able to get anything other than an invalid processor when I tried
that.
On Wed, Jan 30, 2019 at 9:25 AM Mike Th
ome Error message.
>
>
>
>1. Invalid Query [Existing issue we discussed]: We tried the below
>solution you have provided and it is working fine. However, do we need to
>raise the JIRA, to get it fixed by the way of pulling it off from the
>attribute?
>
>
>
3:40 PM Mike Thomsen wrote:
> I just passed a query through the flowfile body with the value {"input":}
> and it routed to failure without incident. Anything else about your
> environment you can share?
>
> On Mon, Jan 28, 2019 at 1:55 AM Dnyaneshwar Pawar <
> dny
readPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>
> at java.lang.Thread.run(Thread.java:745)
>
>
>
>
>
> We have also enclosed screen shots for NiFi flow, Ge
first two routed to success, the last one routed to failure both on
1.8.0 and 1.9.0-SNAPSHOT.
Can you share your input?
Thanks,
Mike
On Fri, Jan 25, 2019 at 10:53 AM Mike Thomsen
wrote:
> Ok, so it's a current issue. I'll look into it.
>
> On Fri, Jan 25, 2019 at 12:16
Ok, so it's a current issue. I'll look into it.
On Fri, Jan 25, 2019 at 12:16 AM Dnyaneshwar Pawar <
dnyaneshwar_pa...@persistent.com> wrote:
> Mike,
>
>
>
>We are using MongoDB 3.4.7 and Apache NiFi 1.8.0
>
>
>
> Regards,
>
> Dnyaneshwa
What version are you using?
On Thu, Jan 24, 2019 at 8:23 AM Dnyaneshwar Pawar <
dnyaneshwar_pa...@persistent.com> wrote:
> Hi All,
>
> We are working with MongoDB integration with NiFi (Version 1.8). And we
> observed one issue related to failure case. Whenever, GetMongo processes
> the incorrec
This is something that we could write a Jira ticket against, but as it is
hard-coded for now, I don't think there's a work around.
On Thu, Jan 17, 2019 at 6:57 AM Dnyaneshwar Pawar <
dnyaneshwar_pa...@persistent.com> wrote:
> …just found out that Jetty is not loading appropriate keystore provider
There's no need to use a self-signed certificate. You have two options for
creating good certificates with a CA (albeit not an enterprise one) without
much trouble:
1.
https://www.elastic.co/guide/en/elasticsearch/reference/current/certgen.html
2. NiFi TLS Toolkit
Both of those will give you a va
rote:
>
>> We used the AvroSchemaRegistry
>>
>> Dano
>>
>> On Tue, Jan 15, 2019, 12:51 PM Mike Thomsen wrote:
>>
>>> What schema registry are others using in production use cases? We tried
>>> out the HortonWorks registry, but it seemed to s
What schema registry are others using in production use cases? We tried out
the HortonWorks registry, but it seemed to stop accepting updates once we
hit "v3" of our schema (we didn't name it v3, that's the version it showed
in the UI). So I'd like to know what others are doing for their registry
u
PasLe,
Take a look at the processors we have already for SQL-centric ETL and other
aspects of ETL. One of the advantages of doing that with NiFi instead of
Spark is that you have the ability to view the progress as you are
building, and I think you'll find that it makes debugging your workflow a
l
Rolled off a team a few months ago that was using 1.7.1 with Docker Swarm.
Don't remember the exact commands they were using, but they did use some
sort of docker-compose setup.
On Thu, Jan 3, 2019 at 3:06 PM Kifle, Dawit *
wrote:
> Hello,
>
>
>
> I am having a problem running NiFi 1.8.0 image
James,
Only skimmed this, but this looks like it might provide some interesting
ideas on how to transition from Protobuf to Avro:
https://gist.github.com/alexvictoor/1d3937f502c60318071f
Mike
On Tue, Dec 18, 2018 at 3:07 PM Otto Fowler wrote:
> What would be really cool would be if you could
After a little digging, it appears that the JDBC spec hasn't caught up to
the inclusion of JSON as a data type. Might be the result of only a few
newer databases supporting it and JSONB as native data types. Regardless,
it looks like something we can probably implement for 1.9. According to the
exa
h memory mapped io as well not being able to let go of files until
> restart.
>
> On Wed, Dec 12, 2018, 7:19 PM Mike Thomsen
>> Mike,
>>
>> I did lsof +L1 and saw a ton of files listed that were marked (deleted),
>> but the OS was hanging onto them. They were all temp
an old JVM bug that causes File#delete to not work when a stream is
still open. So it looks like it might be some bug deeper in the API that
I'm using.
On Wed, Dec 12, 2018 at 4:50 PM Mike Thomsen wrote:
> Thanks, but what I cannot figure out is why du -h is reporting that the
>
che.org/jira/browse/NIFI-4287
>
> Regards
>
>
> On Wed, Dec 12, 2018 at 1:07 PM Mike Thomsen
> wrote:
>
>> I configured the content repository to append 10 files per flowfile
>> because I'm dealing with a lot of decompressing and recompressing of small
>>
I configured the content repository to append 10 files per flowfile because
I'm dealing with a lot of decompressing and recompressing of small files.
The content repo goes up and down appropriately as the content claims are
removed, but I noticed that after a lot of heavy processing sometimes
20-25
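The per-claim packing described above is controlled in nifi.properties; a sketch of the relevant setting (the default is 100):

```properties
# Pack the content of up to 10 flowfiles into a single content claim file
nifi.content.claim.max.flow.files=10
```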
first):
>
> [
> {
> "operation": "shift",
> "spec": {
> "*": "user.&"
> }
> }
> ]
>
> Regards,
> Matt
>
> On Mon, Dec 3, 2018 at 11:37 AM Mike Thomsen
> wrote:
> >
> >
ec should work with JoltTransformRecord
> (you can go directly from CSV to nested JSON without having to convert
> first):
>
> [
> {
> "operation": "shift",
> "spec": {
> "*": "user.&"
> }
>
Looks like I missed this:
https://community.hortonworks.com/questions/212877/want-to-convert-csv-to-nested-json-using-nifi.html
I'll get crackin on Jolt since that seems to be the best answer at the
moment.
On Mon, Dec 3, 2018 at 11:24 AM Mike Thomsen wrote:
> We have a need to be
We have a need to be able to take a CSV file and convert it into a nested
JSON structure. I did a simple test with GenerateFlowFile and a few
ConvertRecord processors. Test was:
GenerateFlowFile (JSON) -> ConvertRecord (JSON in, CSV out) ->
ConvertRecord (CSV in, JSON out) and it threw an exceptio
LookupAttribute also seems like it could be another avenue, but it
> doesn't have the MongoDBLookupService in the list of Compatible Controller
> Services.
>
> Ryan
>
> On Fri, Nov 30, 2018 at 10:24 AM Mike Thomsen
> wrote:
>
>> LookupAttribute + the MongoDBLooku
LookupAttribute + the MongoDBLookupService should be able to do that.
On Thu, Nov 29, 2018 at 8:05 PM Otto Fowler wrote:
> Sounds like you want to look at enrichment with the LookupRecord
> processors and Mongo.
>
> https://community.hortonworks.com/articles/146198/data-flow-enrichment-with-nifi
> processor was started with the
> offset at 'earliest'?
>
> Thanks
> -Mark
>
> On Nov 13, 2018, at 12:54 PM, Mike Thomsen wrote:
>
> That would appear to be the case. So here's what I was doing:
>
> 1. Used this sort of code to se
Mike - so does this mean the parse.failure relationship wasn't working
> though? We should try to dig into this more if you're up for it or
> sharing more details. We dont want the behavior you ran into for
> sure...
> On Tue, Nov 13, 2018 at 12:49 PM Mike Thomsen
> wrote:
o pick out a schema name to use as Merge strategy.
> > ConsumeKafka -> (ConvertRecord) -> Merge Content
> >
> > /Viking
> >
> > From: Mike Thomsen
> > Sent: Tuesday, November 13, 2018 3:02 PM
> > To: users@nifi.apache.org
>
(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
On Tue, Nov 13, 2018 at 10:00 AM Pierre Villard
wrote:
> Hey Mike,
>
> Anything in the logs?
>
> Pierre
>
> On Tue, Nov 13, 2018 at 15:56, Mike Thomsen
> wrote:
>
>> I have an odd situation where I have Co
I have an odd situation where I have ConsumeKafkaRecord and ConsumeKafka
pulling from the same topic under different consumer groups, but only the
latter will pull new events. I ran into a situation where the reader didn't
like the Avro data being pulled from the queue and so I created new topics
a
Ameer,
Depending on how you implemented the custom framework, you may be able to
easily drop it in place into a custom NiFi processor. Without knowing much
about your implementation details, if you can act on Java streams, Strings,
byte arrays and things like that it will probably be very straight
As a backup to that, you can also write a Groovy script for ExecuteScript
that uses stax to iterate over the XML data. It won't care about schemas
(Avro or XML) and stuff like that; just check for basic validity.
On Fri, Oct 26, 2018 at 11:42 AM Joe Witt wrote:
> Cant your logic detect the stran
I worked on a team that was packaging NiFi for distribution for people to
use the flow as a service, and what they did to make it easy was export the
flow.xml.gz file and add it to a custom Docker image. That way it became
essentially a lift-and-shift operation. Once you do something like that,
you
Guillaume,
We also have a patch coming in 1.8 that exposes the clustering settings
through Docker, so that should make it a lot easier for you to set up a
test cluster.
On Fri, Oct 19, 2018 at 3:49 AM Asanka Sanjaya wrote:
> Hi Guillaume,
> I'm using nifi in our production kubernetes cluster on
raneous to NiFi, but does this mean that we need to install a
> cert into ZooKeeper? Right now, both apps are running on the same box.
>
>
>
> Thank you.
>
>
>
> *From:* Mike Thomsen
> *Sent:* Monday, October 15, 2018 9:02 AM
> *To:* users@nifi.apache.org
> *Subjec
point me to instructions how to configure a cluster
> with an external instance of ZooKeeper? The NiFi Admin Guide talks
> exclusively about the embedded one.
>
>
>
> Thanks again.
>
>
>
> *From:* Mike Thomsen
> *Sent:* Friday, October 12, 2018 10:17 AM
> *To:*
tor-Framework-0]
> o.a.c.f.state.ConnectionStateManager State change: SUSPENDED
>
> 2018-10-12 08:21:42,092 INFO [Curator-ConnectionStateManager-0]
> o.a.n.c.l.e.CuratorLeaderElectionManager
> org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager$ElectionListener@1790
Another thing: you can also have the process prepend a period to the
filename to hide the file until it's done being written and can be renamed,
if you want to be extra safe.
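The dot-prefix trick can be sketched in shell (filenames are illustrative):

```shell
# Write under a hidden name first; List/Get processors with default
# file filters skip dotfiles, so a half-written file is never picked up
tmp=".data.csv"
printf 'a,b,c\n' > "$tmp"

# Rename on the same filesystem once the write is complete; the rename is
# atomic, so the file only becomes visible in its finished state
mv "$tmp" "data.csv"
```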
On Fri, Oct 12, 2018 at 10:01 AM Aldrin Piri wrote:
> Hi Tom,
>
> You can make use of the minimum file age
Also, in a production environment NiFi should have its own dedicated
ZooKeeper cluster to be on the safe side. You should not reuse ZooKeeper
quorums (e.g., have HBase and NiFi point to the same quorum).
On Fri, Oct 12, 2018 at 8:29 AM Mike Thomsen wrote:
> Alexander,
>
> I am pretty
Alexander,
I am pretty sure your problem is here:
*nifi.state.management.embedded.zookeeper.start=true*
That spins up an embedded ZooKeeper, which is generally intended to be used
for local development. For example, HBase provides the same feature, but it
is intended to allow you to test a real H
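For production, the sketch below disables the embedded server and points NiFi at an external quorum instead (hostnames are placeholders):

```properties
# nifi.properties
nifi.state.management.embedded.zookeeper.start=false
nifi.zookeeper.connect.string=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
```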
ing Solr container is on remote
> server.
> I don't understand how can I access Solr collection on docker host address
> and Solr port :
> from Python script using pysolr, but from Nifi I can't do that.
>
> BR,
> Tom
>
> On Thu, 27 Sep 2018 at 14:28, Mike Thomse
I think I've run into similar problems with SolrCloud in the past w/
Docker. SolrCloud stores the IP address it binds to in ZooKeeper, which is
why you see the Docker internal IP address there and not localhost:8983
since presumably you're using localhost: as the Solr Location. I
think you can forc
hostname: nifi
Under the nifi declaration
On Mon, Sep 24, 2018 at 11:07 AM Juan Pablo Gardella <
gardellajuanpa...@gmail.com> wrote:
> How?
>
> On Mon, 24 Sep 2018 at 11:31 David Gallagher
> wrote:
>
>> Hi – not sure if it helps, but you can set a static hostname in your
>> docker-compose.
>>
>>
Did you change their names when you created your new versions?
On Fri, Sep 14, 2018 at 6:59 AM Dominique De Vito
wrote:
> Hi,
>
> I have defined 5 new (custom) processors (derivated respectively from
> existing ones - ConsumeKafka, GetSFTP, GetFile, ListenTCP an ListFile) into
> 2 NAR (1 NAR with
github.com/apache/nifi/blob/master/nifi-api/src/main/java/org/apache/nifi/registry/flow/VersionedFlowState.java
>
>
>
> *From: *Mike Thomsen
> *Reply-To: *"users@nifi.apache.org"
> *Date: *Friday, August 24, 2018 at 10:55
> *To: *"users@nifi.apache.org"
>
It probably has to do with the database the registry used not being
migrated to your new environment. Try setting up a snapshot of that too.
On Sat, Sep 8, 2018 at 1:06 PM David Gallagher
wrote:
> Hi – I have a dev registry server (1.7.1) set up with git, and am trying
> to use git bundle to move
Brandon,
What processor do you use it for in that capacity? If it's an ElasticSearch
one we can look into ways to bring this functionality into that bundle so
Andy can refactor.
Thanks,
Mike
On Wed, Sep 5, 2018 at 12:07 PM Brandon DeVries wrote:
> Andy,
>
> We use it pretty much how Joe is...
> How can I parse it to name/value pairs in groovy script?
I would recommend getting the Groovy binary distribution (we use 2.4.X) and
experimenting with that. Aside from us throwing in a few of the NiFi APIs,
it's a standard Groovy environment. You'll flatten the learning curve on
writing these s
Pushed a PR for this.
On Thu, Aug 23, 2018 at 9:23 PM Mike Thomsen wrote:
> Ryan,
>
> Didn't see you put out a pull request in the last two weeks. Let me know
> if you're actively working this, if not I can do my own patch.
>
> Thanks,
>
> Mike
>
> On
to help dig into this with you more so we can
> file the appropriate JIRA to get it fixed so it doesn’t require special
> handling.
>
>
>
> Thanks,
> Kevin
>
> On Fri, Aug 24, 2018 at 9:23 AM Mike Thomsen
> wrote:
>
>> We have a custom processor that got a new p
We have a custom processor that got a new property. It's part of an
embedded PG and the parent PG is version controlled. It detects the new
property at that level, but won't let us "commit local changes." So we're
stuck unable to commit. Has anyone seen this before or have any ideas on
what is happ
Ryan,
Didn't see you put out a pull request in the last two weeks. Let me know if
you're actively working this, if not I can do my own patch.
Thanks,
Mike
On Wed, Aug 8, 2018 at 6:03 AM Mike Thomsen wrote:
> Yes. Use a custom validator.
>
> On Tue, Aug 7, 2018 at 1:58
but unfortunately, I've already
> committed to Windows.
> What about a script? Is there some tool you know of that can just be
> called by NiFi to convert an input CSV file to a Parquet file?
>
> On Wed, Aug 15, 2018 at 8:32 AM, Mike Thomsen
> wrote:
>
>> Scott,
>>
>
Scott,
You can also try Docker on Windows. Something like this should work:
docker run -d --name nifi-test -v C:/nifi_temp:/opt/data_output -p
8080:8080 apache/nifi:latest
I don't have Windows either, but Docker seems to work fine for my
colleagues that have to use it on Windows. That should bri
d in how the output is formatted, it could be
>> harder
>> >>> >>> to maintain (bugs to be fixed in two places, e.g.). I think we
>> should
>> >>> >>> add an optional RecordWriter property to ExecuteSQL, and the
>> >>> >
> wrote:
>
>> Haha, yea I'm testing out some code-hacking here to see what I can do
>> too. If it works, I can try to submit a Pull Request for it. I haven't
>> done one before, and this seems pretty easy to make it configurable.
>>
>> Ryan
>>
>
Also, I should have read your comment on the Jira ticket because it's
even easier than that!
On Tue, Aug 7, 2018 at 1:47 PM Mike Thomsen wrote:
> I just checked the code, and it's using the default Jackson mapping
> behavior for that. The Mongo driver returns a Date, and lo
I just checked the code, and it's using the default Jackson mapping
behavior for that. The Mongo driver returns a Date, and looks like Jackson
is just turning that into an ISO8601 string without that level of
precision. A custom mapper for Date objects should be able to solve that.
I'll work it whe
f created in the other
>> instance.
>>
>> Generally it is either a single shared registry, or multiple
>> registries each with their own back-end storage mechanisms and then
>> you can use the CLI to promote flows between the registry instances.
>>
>>
Has anyone tried having two or more registry instances pointing to the same
repo and keeping them in sync?
We have a NiFi deployment where it would be an easier sell to have 3
instances of the registry sharing the same repo than to have one instance
that is a big exception to the network security
My guess is that it is due to the fact that Avro is the only record format
that maps to SQL data types nearly feature for feature.
On Tue, Aug 7, 2018 at 8:33 AM Boris Tyukin wrote:
> I've been wondering since I started learning NiFi why ExecuteSQL processor
> only returns AVRO formatte
est
cycle.
Mike
On Mon, Jul 30, 2018 at 4:19 PM Michael Moser wrote:
> Hey Mike,
>
> As long as it's a controller service PropertyDescriptor that uses
> dynamicallyModifiesClasspath, check out the JMSConnectionFactoryProvider in
> the nifi-jms-bundle.
>
> -- Mike
>
t;-- looks in the FlowFile Content
>> for those operators
>>
>>In my above email, I'm suggesting an alternative way it could be done
>> with just NiFi Processor Properties.
>>
>> Thanks for looking into it,
>> Ryan
>>
>>
>> On Mo
Is there a good example somewhere that shows how to use
dynamicallyModifiesClasspath on the PropertyDescriptor and use it add new
JARs that are available to the controller service?
Thanks,
Mike
scussion) going soon...
>
> Regards,
> Matt
>
> [1] https://issues.apache.org/jira/browse/NIFI-5420
> [2] https://issues.apache.org/jira/browse/NIFI-5463
> On Wed, Jul 25, 2018 at 9:18 PM Mike Thomsen
> wrote:
> >
> > Ryan,
> >
> > Understandable. We
Ryan,
Understandable. We haven't found a need for Beats or Forwarders here either
because S2S gives everything you need to reliably ship the data.
FWIW, if your need changes, I would recommend stripping down the provenance
data. We cut out about 66-75% of the fields and dropped the intermediate
r
I have a client with a similar use case. They wanted to be able to figure
out when they processed a particular data set (they're using batch
processing with NiFi). The solution I gave them was based on using Metrics
and Provenance reporting to the ELK stack. I know that doesn't directly
answer your
8.0-SNAPSHOT-dockermaven)...
> ERROR: manifest for apache/nifi:1.8.0-SNAPSHOT-dockermaven not found
>
> Regards,
>
> Chris
>
>
>
> On Fri, Jul 20, 2018 at 1:10 PM Mike Thomsen
> wrote:
>
>> Cluster support is only in 1.8.
>> On Fri, Jul 20, 2018 at 7:02 A