[GitHub] nifi issue #2743: NIFI-5226: Implement a Record API based PutInfluxDB proces...
Github user timhallinflux commented on the issue: https://github.com/apache/nifi/pull/2743 @MikeThomsen -- any update here? ---
[GitHub] nifi issue #2743: NIFI-5226: Implement a Record API based PutInfluxDB proces...
Github user timhallinflux commented on the issue: https://github.com/apache/nifi/pull/2743 PutInfluxDB was created as a single-purpose tool -- it accepts line protocol only. It does not read CSV, Avro, or JSON out of the box; a developer who wants to use those formats with NiFi has to extend it and write their own parsers. That is OK, and it does support a direct integration. However, PutInfluxDBRecord addresses the problem of reading the data. It simplifies data parsing and handling using the concept of Records ( https://blogs.apache.org/nifi/entry/record-oriented-data-with-nifi ). It works more naturally with NiFi objects, and the fields and tags are configurable from within the NiFi UI, which delivers a much more integrated experience. For example: reading Twitter with PutInfluxDB is not possible without coding/external configuration. On the contrary, reading Twitter JSON via PutInfluxDBRecord requires no development effort and leverages the tools within NiFi itself. Still, the two classes can coexist depending on the type of work that needs to be done. I'm in favor of moving ahead with PutInfluxDBRecord ---
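[Editor's note: to illustrate the "line protocol only" point above, here is a minimal self-contained Java sketch of what InfluxDB line protocol looks like. The helper class and method names are hypothetical, not part of NiFi or any InfluxDB client; escaping follows the published line-protocol rules in broad strokes only.]

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical helper: builds one InfluxDB line-protocol point, i.e. the only
// input format PutInfluxDB accepts:  measurement[,tags] fields [timestamp]
public class LineProtocolSketch {

    // Escape commas, spaces and equals signs, as the line-protocol spec
    // requires for tag keys/values (a simplification for this sketch).
    static String escapeTag(String s) {
        return s.replace(",", "\\,").replace(" ", "\\ ").replace("=", "\\=");
    }

    static String toLineProtocol(String measurement,
                                 Map<String, String> tags,
                                 Map<String, Object> fields,
                                 long timestampNanos) {
        StringBuilder sb = new StringBuilder(escapeTag(measurement));
        for (Map.Entry<String, String> t : tags.entrySet()) {
            sb.append(',').append(escapeTag(t.getKey()))
              .append('=').append(escapeTag(t.getValue()));
        }
        sb.append(' ');
        boolean first = true;
        for (Map.Entry<String, Object> f : fields.entrySet()) {
            if (!first) sb.append(',');
            first = false;
            sb.append(escapeTag(f.getKey())).append('=');
            Object v = f.getValue();
            if (v instanceof String) {
                // String field values are double-quoted in line protocol.
                sb.append('"').append(((String) v).replace("\"", "\\\"")).append('"');
            } else {
                sb.append(v);
            }
        }
        sb.append(' ').append(timestampNanos);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> tags = new TreeMap<>();
        tags.put("host", "server01");
        Map<String, Object> fields = new TreeMap<>();
        fields.put("value", 0.64);
        System.out.println(toLineProtocol("cpu_load", tags, fields, 1434055562000000000L));
        // prints: cpu_load,host=server01 value=0.64 1434055562000000000
    }
}
```

PutInfluxDBRecord's value is exactly that a Record Reader produces this kind of point from CSV/Avro/JSON without any such hand-written formatting code.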
[GitHub] nifi pull request #2666: NIFI-5130 ExecuteInfluxDBQuery processor chunking s...
Github user timhallinflux commented on a diff in the pull request: https://github.com/apache/nifi/pull/2666#discussion_r186204325
--- Diff: nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/ExecuteInfluxDBQuery.java ---
@@ -86,6 +93,18 @@
             .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
             .build();

+    public static final PropertyDescriptor INFLUX_DB_QUERY_CHUNK_SIZE = new PropertyDescriptor.Builder()
+            .name("influxdb-query-chunk-size")
+            .displayName("Results chunk size")
+            .description("Chunking can be used to return results in a stream of smaller batches "
+                    + "(each has partial results up to a chunk size) rather than as a single response. "
+                    + "Chunked queries can return an unlimited number of rows. Note: Chunking is enabled when the result chunk size is greater than 0")
+            .defaultValue(String.valueOf(DEFAULT_INFLUX_RESPONSE_CHUNK_SIZE))
+            .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
+            .addValidator(StandardValidators.createLongValidator(0, Integer.MAX_VALUE, true))
--- End diff --
Aligning with the default value seems very rational. ---
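[Editor's note: the chunking semantics described in the property above can be sketched in plain Java. This is an illustration of the behavior only, not the NiFi or influxdb-java API: results come back as a stream of batches of at most chunkSize rows, and a chunk size of 0 (the default) disables chunking, yielding one full response.]

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of chunked result delivery (hypothetical helper class).
public class ChunkingSketch {

    // Split rows into batches of at most chunkSize; chunkSize <= 0 means
    // chunking is disabled and everything arrives as a single response.
    static <T> List<List<T>> chunk(List<T> rows, int chunkSize) {
        if (chunkSize <= 0) {
            return List.of(new ArrayList<>(rows));
        }
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += chunkSize) {
            batches.add(new ArrayList<>(rows.subList(i, Math.min(i + chunkSize, rows.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        System.out.println(chunk(List.of(1, 2, 3, 4, 5), 2)); // [[1, 2], [3, 4], [5]]
        System.out.println(chunk(List.of(1, 2, 3), 0));       // [[1, 2, 3]]
    }
}
```

This is why a chunked query can return an unlimited number of rows: no single batch ever has to fit the whole result set in memory.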
[GitHub] nifi issue #2562: NIFI-4927 - InfluxDB Query Processor
Github user timhallinflux commented on the issue: https://github.com/apache/nifi/pull/2562 Yep. Working on a blog post to highlight both parts...writer and reader. U (update) is not really possible. For D (delete), both DELETE and/or DROP statements are allowed through InfluxQL. ---
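[Editor's note: for readers unfamiliar with InfluxQL, here is a short illustration of the two delete-side statements mentioned above. The measurement name "cpu_load" is an example, not from the thread.]

```
-- Delete points from a measurement matching a time predicate
DELETE FROM "cpu_load" WHERE time < '2018-01-01T00:00:00Z'

-- Drop an entire measurement and all of its data
DROP MEASUREMENT "cpu_load"
```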
[GitHub] nifi issue #2562: NIFI-4927 - InfluxDB Query Processor
Github user timhallinflux commented on the issue: https://github.com/apache/nifi/pull/2562 This is great! Thank you Mike and Mans. Mans, if you reach out to me at tim at influxdata dot com, we would be happy to send you some stickers and a hoodie in appreciation for your efforts here. ---
[GitHub] nifi issue #2562: NIFI-4927 - InfluxDB Query Processor
Github user timhallinflux commented on the issue: https://github.com/apache/nifi/pull/2562 @joewitt and @MikeThomsen -- you can also just quickly spin up the TICK stack via the sandbox. Have a look here... https://github.com/influxdata/sandbox If @joewitt is no longer technical enough to run this, I'll be more than happy to do a screen share and help him out! :-) Thank you @mans2singh for contributing this! ---