[GitHub] [nifi-minifi-cpp] james94 commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support
james94 commented on a change in pull request #784:
URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r432259370

## File path: libminifi/test/script-tests/ExecutePythonProcessorTests.cpp
## @@ -0,0 +1,276 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#define CATCH_CONFIG_MAIN
+
+#include <fstream>
+#include <memory>
+#include <string>
+
+#include "../TestBase.h"
+
+#include "processors/GetFile.h"
+#include "python/ExecutePythonProcessor.h"
+#include "processors/LogAttribute.h"
+#include "processors/PutFile.h"
+#include "utils/file/FileUtils.h"
+
+namespace {
+
+#include <unistd.h>
+#define GetCurrentDir getcwd
+
+std::string GetCurrentWorkingDir(void) {
+  char buff[FILENAME_MAX];
+  GetCurrentDir(buff, FILENAME_MAX);
+  std::string current_working_dir(buff);
+  return current_working_dir;
+}
+
+class ExecutePythonProcessorTestBase {
+ public:
+  ExecutePythonProcessorTestBase() :
+      logTestController_(LogTestController::getInstance()),
+      logger_(logging::LoggerFactory<ExecutePythonProcessorTestBase>::getLogger()) {
+    reInitialize();
+    logTestController_.setDebug<TestPlan>();
+    logTestController_.setDebug<minifi::processors::GetFile>();
+    logTestController_.setDebug<minifi::processors::PutFile>();
+    logTestController_.setDebug<minifi::python::processors::ExecutePythonProcessor>();
+  }
+  virtual ~ExecutePythonProcessorTestBase() {
+    logTestController_.reset();
+  }
+
+ protected:
+  void reInitialize() {
+    testController_.reset(new TestController());
+    plan_ = testController_->createPlan();
+  }
+
+  std::string createTempDir() {
+    char dirtemplate[] = "/tmp/gt.XXXXXX";
+    std::string temp_dir = testController_->createTempDirectory(dirtemplate);
+    REQUIRE(!temp_dir.empty());
+    struct stat buffer;
+    REQUIRE(-1 != stat(temp_dir.c_str(), &buffer));
+    REQUIRE(S_ISDIR(buffer.st_mode));
+    return temp_dir;
+  }
+
+  std::string putFileToDir(const std::string& dir_path, const std::string& file_name, const std::string& content) {
+    std::string file_path(dir_path + utils::file::FileUtils::get_separator() + file_name);
+    std::ofstream out_file(file_path);
+    if (out_file.is_open()) {
+      out_file << content;
+      out_file.close();
+    }
+    return file_path;
+  }
+
+  std::string createTempDirWithFile(const std::string& file_name, const std::string& content) {
+    std::string temp_dir = createTempDir();
+    putFileToDir(temp_dir, file_name, content);
+    return temp_dir;
+  }
+
+  std::string getFileContent(const std::string& file_name) {
+    std::ifstream file_handle(file_name);
+    REQUIRE(file_handle.is_open());
+    const std::string file_content{ (std::istreambuf_iterator<char>(file_handle)), (std::istreambuf_iterator<char>()) };
+    file_handle.close();
+    return file_content;
+  }
+
+  std::string getScriptFullPath(const std::string& script_file_name) {
+    return SCRIPT_FILES_DIRECTORY + utils::file::FileUtils::get_separator() + script_file_name;
+  }
+
+  const std::string TEST_FILE_NAME{ "test_file.txt" };
+  const std::string TEST_FILE_CONTENT{ "Test text\n" };
+  const std::string SCRIPT_FILES_DIRECTORY{ "test_scripts" };
+
+  std::unique_ptr<TestController> testController_;
+  std::shared_ptr<TestPlan> plan_;
+  LogTestController& logTestController_;
+  std::shared_ptr<logging::Logger> logger_;
+};
+
+class SimplePythonFlowFileTransferTest : public ExecutePythonProcessorTestBase {
+ public:
+  enum class Expectation {
+    OUTPUT_FILE_MATCHES_INPUT,
+    RUNTIME_RELATIONSHIP_EXCEPTION,
+    PROCESSOR_INITIALIZATION_EXCEPTION
+  };
+  SimplePythonFlowFileTransferTest() : ExecutePythonProcessorTestBase{} {}
+
+ protected:
+  void testSimpleFilePassthrough(const Expectation expectation, const core::Relationship& execute_python_out_conn, const std::string& used_as_script_file, const std::string& used_as_script_body) {
+    reInitialize();
+    const std::string input_dir = createTempDirWithFile(TEST_FILE_NAME, TEST_FILE_CONTENT);
+    const std::string output_dir = createTempDir();
+
+    addGetFileProcessorToPlan(input_dir);
+    if (Expectation::PROCESSOR_INITIALIZATION_EXCEPTION == expectation) {
+      REQUIRE_THROWS(addExecutePythonProcessorToPlan(used_as_script_file, used_as_script_body));
+      return;
+    }
[GitHub] [nifi-minifi-cpp] james94 commented on pull request #781: MINIFICPP-1214: Converts H2O Processors to use ALv2 compliant H20-3 library
james94 commented on pull request #781: URL: https://github.com/apache/nifi-minifi-cpp/pull/781#issuecomment-635631793 @szaszm and @phrocker I will let you know as soon as I have some test scripts for these h2o processors. I will follow up soon on them. Thanks for your guidance. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] sjyang18 opened a new pull request #4304: NIFI-7473: adding SupportsBatching annotation to Azure Blob processors
sjyang18 opened a new pull request #4304:
URL: https://github.com/apache/nifi/pull/4304

Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here:

Description of PR
_Enables X functionality; fixes bug NIFI-._

In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [X] Has your PR been rebased against the latest commit within the target branch (typically `master`)?
- [X] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._

### For code changes:
- [X] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [X] Have you verified that the full build is successful on JDK 8?
- [X] Have you verified that the full build is successful on JDK 11?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`?
- [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`?
- [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
[jira] [Commented] (NIFI-6721) jms_expiration attribute problem
[ https://issues.apache.org/jira/browse/NIFI-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119088#comment-17119088 ]

Seokwon Yang commented on NIFI-6721:

[~tchermak] I have a patch for this work: [https://github.com/apache/nifi/pull/4270] Would you take a look and test it out?

> jms_expiration attribute problem
>
> Key: NIFI-6721
> URL: https://issues.apache.org/jira/browse/NIFI-6721
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Affects Versions: 1.8.0
> Environment: Linux CENTOS 7
> Reporter: Tim Chermak
> Assignee: Seokwon Yang
> Priority: Minor
> Time Spent: 10m
> Remaining Estimate: 0h
>
> The documentation for PublishJMS indicates the JMSExpiration is set with the attribute jms_expiration. However, this value is really the time-to-live (ttl) in milliseconds. The JMSExpiration is calculated by the provider library as "expiration = timestamp + ttl".
> So, this NiFi flowfile attribute should really be named jms_ttl. The current setup works correctly when NiFi creates and publishes a message, but has problems when you try to republish a JMS message.
> GetFile -> UpdateAttribute -> PublishJMS creates a valid JMSExpiration in the message; however, when a JMS message has the expiration set, ConsumeJMS -> PublishJMS shows an error in the nifi.--app.log file:
> "o.apache.nifi.jms.processors.PublishJMS PublishJMS[id=016b1005-xx...] Incompatible value for attribute jms_expiration [1566428032803] is not a number. Ignoring this attribute."
> Looks like ConsumeJMS sets the flowfile attribute to the expiration value rather than the time-to-live value. Time-to-live should be jms_ttl = expiration - current_time.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
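The direction the ticket points at, deriving a remaining time-to-live from the absolute expiration before republishing, can be sketched as a small helper. This is illustrative only, not NiFi's actual code; the class and method names, and the choice to return 0 for missing or non-numeric values (mirroring PublishJMS's "ignore this attribute" behavior), are assumptions.

```java
// Hypothetical sketch: a consumed message's jms_expiration holds an absolute
// timestamp, so the remaining ttl must be computed before republishing.
public class JmsTtlSketch {
    // Returns max(0, expiration - now); 0 when the attribute is missing
    // or not a number, analogous to PublishJMS ignoring bad values.
    public static long remainingTtl(String jmsExpirationAttr, long nowMillis) {
        if (jmsExpirationAttr == null || jmsExpirationAttr.trim().isEmpty()) {
            return 0L;
        }
        try {
            final long expiration = Long.parseLong(jmsExpirationAttr.trim());
            return Math.max(0L, expiration - nowMillis);
        } catch (NumberFormatException e) {
            return 0L;
        }
    }
}
```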
[GitHub] [nifi] sjyang18 commented on a change in pull request #4265: NIFI-7434: Endpoint suffix property in AzureStorageAccount NIFI processors
sjyang18 commented on a change in pull request #4265:
URL: https://github.com/apache/nifi/pull/4265#discussion_r432129688

## File path: nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/AzureStorageUtils.java
## @@ -85,6 +86,20 @@
         .sensitive(true)
         .build();

+    public static final PropertyDescriptor ENDPOINT_SUFFIX = new PropertyDescriptor.Builder()
+        .name("storage-endpoint-suffix")
+        .displayName("Common Storage Account Endpoint Suffix")
+        .description(
+            "Storage accounts in public Azure always use a common FQDN suffix. " +
+            "Override this endpoint suffix with a different suffix in certain circumstances (like Azure Stack or non-public Azure regions). " +
+            "The preferred way is to configure them through a controller service specified in the Storage Credentials property. " +
+            "The controller service can provide a common/shared configuration for multiple/all Azure processors. Furthermore, the credentials " +
+            "can also be looked up dynamically with the 'Lookup' version of the service.")
+        .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)

Review comment: Yes. Option 1 is implemented.
[GitHub] [nifi-minifi-cpp] phrocker commented on pull request #781: MINIFICPP-1214: Converts H2O Processors to use ALv2 compliant H20-3 library
phrocker commented on pull request #781:
URL: https://github.com/apache/nifi-minifi-cpp/pull/781#issuecomment-635605071

Thanks @szaszm. I don't have my keys, so I'll merge with a signed commit tomorrow. I'll take another quick pass through and respond with some ideas on the follow-on ticket. Thanks to you and @james94!
[GitHub] [nifi] MikeThomsen commented on pull request #3734: NIFI-6666 Add Useragent Header to InvokeHTTP requests
MikeThomsen commented on pull request #3734: URL: https://github.com/apache/nifi/pull/3734#issuecomment-635556294 @nielsbasjes thanks. I'll try to get back to the review tonight. Overall LGTM now.
[GitHub] [nifi] MikeThomsen commented on a change in pull request #4173: NIFI-7299 Add basic OAuth2 token provider controller service
MikeThomsen commented on a change in pull request #4173:
URL: https://github.com/apache/nifi/pull/4173#discussion_r432075546

## File path: nifi-nar-bundles/nifi-standard-services/nifi-oauth2-provider-bundle/nifi-oauth2-provider-service/src/main/java/org/apache/nifi/oauth2/Util.java
## @@ -0,0 +1,122 @@
+package org.apache.nifi.oauth2;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import okhttp3.OkHttpClient;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.ssl.SSLContextService;
+import org.apache.nifi.util.StringUtils;
+
+import javax.net.ssl.*;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.security.*;
+import java.security.cert.CertificateException;
+import java.util.Map;
+
+public class Util {
+    private static final ObjectMapper MAPPER = new ObjectMapper();
+
+    /**
+     * This code was taken from the InvokeHttp processor from Apache NiFi 1.10-SNAPSHOT found here:
+     *
+     * https://github.com/apache/nifi/blob/1cadc79ad50cf569ee107eaeeb95dc216ea2/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/InvokeHTTP.java
+     * @param okHttpClientBuilder
+     * @param sslService
+     * @param sslContext
+     * @param setAsSocketFactory
+     * @throws IOException
+     * @throws KeyStoreException
+     * @throws CertificateException
+     * @throws NoSuchAlgorithmException
+     * @throws UnrecoverableKeyException
+     * @throws KeyManagementException
+     */
+    public static void setSslSocketFactory(OkHttpClient.Builder okHttpClientBuilder, SSLContextService sslService, SSLContext sslContext, boolean setAsSocketFactory)
+            throws IOException, KeyStoreException, CertificateException, NoSuchAlgorithmException, UnrecoverableKeyException, KeyManagementException {
+
+        final KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
+        final TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance("X509");
+        // initialize the KeyManager array to null; we will overwrite it later if a keystore is loaded
+        KeyManager[] keyManagers = null;
+
+        // we will only initialize the keystore if properties have been supplied by the SSLContextService
+        if (sslService.isKeyStoreConfigured()) {
+            final String keystoreLocation = sslService.getKeyStoreFile();
+            final String keystorePass = sslService.getKeyStorePassword();
+            final String keystoreType = sslService.getKeyStoreType();
+
+            // prepare the keystore
+            final KeyStore keyStore = KeyStore.getInstance(keystoreType);
+
+            try (FileInputStream keyStoreStream = new FileInputStream(keystoreLocation)) {
+                keyStore.load(keyStoreStream, keystorePass.toCharArray());
+            }
+
+            keyManagerFactory.init(keyStore, keystorePass.toCharArray());
+            keyManagers = keyManagerFactory.getKeyManagers();
+        }
+
+        // we will only initialize the truststore if properties have been supplied by the SSLContextService
+        if (sslService.isTrustStoreConfigured()) {
+            // load truststore
+            final String truststoreLocation = sslService.getTrustStoreFile();
+            final String truststorePass = sslService.getTrustStorePassword();
+            final String truststoreType = sslService.getTrustStoreType();
+
+            KeyStore truststore = KeyStore.getInstance(truststoreType);
+            truststore.load(new FileInputStream(truststoreLocation), truststorePass.toCharArray());
+            trustManagerFactory.init(truststore);
+        }
+
+        /*
+            TrustManagerFactory.getTrustManagers returns a trust manager for each type of trust material.
+            Since we are getting a trust manager factory that uses "X509" as its trust management algorithm,
+            we are able to grab the first (and thus the most preferred) and use it as our X509 trust manager.
+            https://docs.oracle.com/javase/8/docs/api/javax/net/ssl/TrustManagerFactory.html#getTrustManagers--
+         */
+        final X509TrustManager x509TrustManager;
+        TrustManager[] trustManagers = trustManagerFactory.getTrustManagers();
+        if (trustManagers[0] != null) {
+            x509TrustManager = (X509TrustManager) trustManagers[0];
+        } else {
+            throw new IllegalStateException("List of trust managers is null");
+        }
+
+        // if keystore properties were not supplied, the keyManagers array will be null
+        sslContext.init(keyManagers, trustManagerFactory.getTrustManagers(), null);
+
+        final SSLSocketFactory sslSocketFactory = sslContext.getSocketFactory();
+        okHttpClientBuilder.sslSocketFactory(sslSocketFactory, x509TrustManager);
+        if (setAsSocketFactory) {
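The trust-manager selection the reviewed code relies on can be exercised standalone against the JVM's default truststore. A slightly safer variant than indexing `trustManagers[0]` is to scan for the first `X509TrustManager`; the class and method names below are illustrative, not part of the PR.

```java
import java.security.KeyStore;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

public class TrustManagerPick {
    // Initializing the factory with a null KeyStore makes it fall back to the
    // JVM's default truststore (cacerts); we then return the first
    // X509TrustManager among the returned trust managers.
    public static X509TrustManager defaultX509TrustManager() throws Exception {
        final TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null);
        for (TrustManager tm : tmf.getTrustManagers()) {
            if (tm instanceof X509TrustManager) {
                return (X509TrustManager) tm;
            }
        }
        throw new IllegalStateException("No X509TrustManager available");
    }
}
```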
[GitHub] [nifi] MikeThomsen commented on pull request #4173: NIFI-7299 Add basic OAuth2 token provider controller service
MikeThomsen commented on pull request #4173: URL: https://github.com/apache/nifi/pull/4173#issuecomment-63211 @jdye64 going to try to write some tests tonight that use `TestServer` to show that the REST calls work. That's one thing we didn't have to write since we had keycloak for direct testing.
[jira] [Created] (NIFI-7495) PutParquet processor generating invalid files - Can not read value at 0 in block -1 in file - Encoding DELTA_BINARY_PACKED is only supported for type INT32
Henrique Neves do Nascimento created NIFI-7495:
--

Summary: PutParquet processor generating invalid files - Can not read value at 0 in block -1 in file - Encoding DELTA_BINARY_PACKED is only supported for type INT32
Key: NIFI-7495
URL: https://issues.apache.org/jira/browse/NIFI-7495
Project: Apache NiFi
Issue Type: Bug
Components: Core Framework
Affects Versions: 1.11.4
Environment: Red Hat Enterprise Linux Server 7.6 (Maipo)
Reporter: Henrique Neves do Nascimento
Attachments: HIVE-stacktrace.txt

PutParquet processor generates invalid parquet files when the flow file has few records. When the flow file has only header + 1-3 records, the PutParquet succeeds and the file is written to HDFS, but it is invalid. But when the flow file has a lot of records, the PutParquet processor also succeeds and it is possible to read the generated files. I tried to open the invalid parquet files using parquet-tools, hive and pyspark, and all of them fail with the same error: "Can not read value at 0 in block -1 in file". Hive also shows me this error in the log file: Caused by: parquet.io.ParquetDecodingException: Encoding DELTA_BINARY_PACKED is only supported for type INT32.
To reproduce the problem, I used a GetFile processor + PutParquet writing in HDFS, NiFi version 1.11.4.

Here is an example of the content of a file that is created, but invalid (I changed some chars):

timestamp,ggsn,apn,msisdn,statustype,ip,sessionid,duration
1589236199000,186.4.75.1,webapn.company.com,44895956521,Start,177945774,979cdf6b021ed038,-1,

And an example of a success case:

timestamp,ggsn,apn,msisdn,statustype,ip,sessionid,duration
158956920,186.6.64.1,webapn.company.com,12395856026,Start,176224166,989dhe2808a0e10c,-1,
158956920,186.6.96.1,webapn.company.com,12393446203,Stop,177119485,989dhe6904515cf7,3712000,
158956920,186.6.0.3,webapn.company.com,12394359006,Stop,-1407442482,989dhe0f010282f1,7092000,
158956920,186.6.96.1,webapn.company.com,12394427751,Start,177550761,989dhe6904dd35df,-1,
158956920,186.6.64.1,webapn.company.com,12393309416,Start,176616344,989dhe2703f93f8a,-1,
158956920,186.6.0.3,webapn.company.com,12394355488,Start,176177290,989dhe10505a9af1,-1,
158956920,186.6.64.1,webapn.company.com,12395478656,Start,176688933,989dhe2703f93f8b,-1,
158956920,186.6.96.1,webapn.company.com,12395214244,Start,172288204,989dhe6900c48aa7,-1,
158956920,186.6.64.1,webapn.company.com,12393418526,Stop,176335286,989dhe27081d0fa1,5,
158956920,186.6.96.1,webapn.company.com,12394828264,Start,177952229,989dhe6900c48aa8,-1,
158956920,152.146.0.1,webapn.company.com,12394416031,Stop,-1405606344,989dhe49ccja1399,58000,
158956920,186.6.96.1,webapn.company.com,12394589217,Start,177743029,989dhe6a04ee2123,-1,
158956920,152.146.0.1,webapn.company.com,12394859666,Start,-1407233995,989dhe4916be3ee9,-1,
158956920,152.146.0.1,webapn.company.com,12393735602,Stop,-1407845029,c83b809dde72f30a,402000,

My PutParquet is configured to write files UNCOMPRESSED, version PARQUET_2_0, and TRUE for avro configs.
It also uses a CSVReader as the record reader, with this schema:

{
  "namespace": "nifi",
  "name": "logs_radius",
  "type": "record",
  "fields": [
    { "name": "timestamp", "type": "long" },
    { "name": "ggsn", "type": "string" },
    { "name": "apn", "type": "string" },
    { "name": "msisdn", "type": "string" },
    { "name": "statustype", "type": "string" },
    { "name": "ip", "type": "int" },
    { "name": "sessionid", "type": "string" },
    { "name": "duration", "type": "long" }
  ]
}
[jira] [Updated] (NIFI-7312) Search function does not work for variable registry in root process group
[ https://issues.apache.org/jira/browse/NIFI-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Gyori updated NIFI-7312: -- Status: Patch Available (was: In Progress) > Search function does not work for variable registry in root process group > - > > Key: NIFI-7312 > URL: https://issues.apache.org/jira/browse/NIFI-7312 > Project: Apache NiFi > Issue Type: Bug >Reporter: Peter Turcsanyi >Assignee: Peter Gyori >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > https://github.com/apache/nifi/pull/4123#discussion_r401883313
[jira] [Closed] (NIFIREG-385) Make revision feature configurable
[ https://issues.apache.org/jira/browse/NIFIREG-385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan Gough closed NIFIREG-385. Resolution: Resolved > Make revision feature configurable > -- > > Key: NIFIREG-385 > URL: https://issues.apache.org/jira/browse/NIFIREG-385 > Project: NiFi Registry > Issue Type: Improvement >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Major > Fix For: 1.0.0 > > Time Spent: 1.5h > Remaining Estimate: 0h > > In order to make the master branch be compatible with current NiFi releases, > we should make the revision feature configurable.
[jira] [Created] (NIFI-7494) CassandraSessionProvider leaks resources when authentication fails
Mark Payne created NIFI-7494:
--

Summary: CassandraSessionProvider leaks resources when authentication fails
Key: NIFI-7494
URL: https://issues.apache.org/jira/browse/NIFI-7494
Project: Apache NiFi
Issue Type: Bug
Components: Extensions
Reporter: Mark Payne

The @OnEnabled method of CassandraSessionProvider creates a few different resources:

{code:java}
Cluster newCluster = createCluster(contactPoints, sslContext, username, password, compressionType);
PropertyValue keyspaceProperty = context.getProperty(KEYSPACE).evaluateAttributeExpressions();
final Session newSession;
if (keyspaceProperty != null) {
    newSession = newCluster.connect(keyspaceProperty.getValue());
} else {
    newSession = newCluster.connect();
}
newCluster.getConfiguration().getQueryOptions().setConsistencyLevel(ConsistencyLevel.valueOf(consistencyLevel));
Metadata metadata = newCluster.getMetadata();
log.info("Connected to Cassandra cluster: {}", new Object[]{metadata.getClusterName()});
cluster = newCluster;
cassandraSession = newSession;
{code}

If the authentication fails, an Exception is thrown from newCluster.connect(). This happens after the `Cluster` object has been created, though, and `Cluster` is Closeable. Any Exception that is thrown by the @OnEnabled method after creating the `Cluster` must be caught and the `Cluster` must be closed. The Exception can then be re-thrown.
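The fix shape the ticket asks for, close the already-created resource if any later step throws and then rethrow, can be sketched with a generic `AutoCloseable` stand-in for the Cassandra `Cluster`. Names here are illustrative, not NiFi's or the Cassandra driver's API.

```java
public class CleanupOnFailure {
    // Runs the connect step; if it throws, closes the resource first so it
    // does not leak, attaching any close() failure as a suppressed exception.
    public static <T extends AutoCloseable, R> R connectOrClose(
            T resource, java.util.function.Function<T, R> connect) throws Exception {
        try {
            return connect.apply(resource);
        } catch (RuntimeException e) {
            try {
                resource.close();
            } catch (Exception onClose) {
                e.addSuppressed(onClose);
            }
            throw e;
        }
    }
}
```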
[GitHub] [nifi-registry] thenatog closed pull request #276: NIFIREG-385 Make revision feature configurable
thenatog closed pull request #276: URL: https://github.com/apache/nifi-registry/pull/276
[GitHub] [nifi] nielsbasjes commented on pull request #3734: NIFI-6666 Add Useragent Header to InvokeHTTP requests
nielsbasjes commented on pull request #3734: URL: https://github.com/apache/nifi/pull/3734#issuecomment-635428317 I checked why my patch didn't fail the build when I first submitted it: The test that failed was added only a few weeks ago.
[GitHub] [nifi] apuntandoanulo commented on a change in pull request #4301: NIFI-7477 Optionally adding validation details as a new flowfile attribute
apuntandoanulo commented on a change in pull request #4301:
URL: https://github.com/apache/nifi/pull/4301#discussion_r431918944

## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestValidateRecord.java
## @@ -593,5 +593,49 @@
     assertEquals(2, ((Map) ((Record) data[2]).getValue("points")).size());
   }
 }
+
+    @Test
+    public void testValidationsDetailsAttributeForInvalidRecords() throws InitializationException, UnsupportedEncodingException, IOException {
+        final String schema = new String(Files.readAllBytes(Paths.get("src/test/resources/TestUpdateRecord/schema/person-with-name-string.avsc")), "UTF-8");
+
+        final CSVReader csvReader = new CSVReader();
+        runner.addControllerService("reader", csvReader);
+        runner.setProperty(csvReader, SchemaAccessUtils.SCHEMA_ACCESS_STRATEGY, SchemaAccessUtils.SCHEMA_TEXT_PROPERTY);
+        runner.setProperty(csvReader, SchemaAccessUtils.SCHEMA_TEXT, schema);
+        runner.setProperty(csvReader, CSVUtils.FIRST_LINE_IS_HEADER, "false");
+        runner.setProperty(csvReader, CSVUtils.QUOTE_MODE, CSVUtils.QUOTE_MINIMAL.getValue());
+        runner.setProperty(csvReader, CSVUtils.TRAILING_DELIMITER, "false");
+        runner.enableControllerService(csvReader);
+
+        final MockRecordWriter validWriter = new MockRecordWriter("valid", false);
+        runner.addControllerService("writer", validWriter);
+        runner.enableControllerService(validWriter);
+
+        final MockRecordWriter invalidWriter = new MockRecordWriter("invalid", true);
+        runner.addControllerService("invalid-writer", invalidWriter);
+        runner.enableControllerService(invalidWriter);
+
+        runner.setProperty(ValidateRecord.RECORD_READER, "reader");
+        runner.setProperty(ValidateRecord.RECORD_WRITER, "writer");
+        runner.setProperty(ValidateRecord.INVALID_RECORD_WRITER, "invalid-writer");
+        runner.setProperty(ValidateRecord.ALLOW_EXTRA_FIELDS, "false");
+        runner.setProperty(ValidateRecord.MAX_VALIDATION_DETAILS_LENGTH, "20");
+        runner.setProperty(ValidateRecord.VALIDATION_DETAILS_ATTRIBUTE_NAME, "valDetails");
+
+        final String content = "1, John Doe\n"
+                + "2, Jane Doe\n"
+                + "Three, Jack Doe\n";
+
+        runner.enqueue(content);
+        runner.run();
+
+        runner.assertTransferCount(ValidateRecord.REL_INVALID, 1);
+        runner.assertTransferCount(ValidateRecord.REL_FAILURE, 0);
+
+        final MockFlowFile invalidFlowFile = runner.getFlowFilesForRelationship(ValidateRecord.REL_INVALID).get(0);
+        invalidFlowFile.assertAttributeEquals("record.count", "1");
+        invalidFlowFile.assertContentEquals("invalid\n\"Three\",\"Jack Doe\"\n");
+        invalidFlowFile.assertAttributeExists("valDetails");

Review comment: Unit test was improved.
[GitHub] [nifi] apuntandoanulo commented on a change in pull request #4301: NIFI-7477 Optionally adding validation details as a new flowfile attribute
apuntandoanulo commented on a change in pull request #4301: URL: https://github.com/apache/nifi/pull/4301#discussion_r431918580 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateRecord.java ## @@ -207,6 +225,8 @@ properties.add(SCHEMA_TEXT); properties.add(ALLOW_EXTRA_FIELDS); properties.add(STRICT_TYPE_CHECKING); +properties.add(MAX_VALIDATION_DETAILS_LENGTH); Review comment: Done
[GitHub] [nifi] nielsbasjes edited a comment on pull request #3734: NIFI-6666 Add Useragent Header to InvokeHTTP requests
nielsbasjes edited a comment on pull request #3734:
URL: https://github.com/apache/nifi/pull/3734#issuecomment-635406203

@MikeThomsen Yes, indeed my change broke the build. The problem here is that the variable `${user.name}` is undefined in the "normal" setup, and my change simply introduces it into the system with a valid value. This test assumes the value is not set at all, and thus fails because it now has an unexpected default value. I fixed this by renaming the test variable from `user.name` to `login.name`.
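The collision described above is easy to demonstrate: `user.name` is one of the standard system properties the JVM defines at startup, so any lookup that falls back to system properties will always find a value for it, while an arbitrary key such as `login.name` stays unset unless explicitly provided. The class name below is illustrative; the check is generic.

```java
public class SysPropCheck {
    // Standard JVM properties such as user.name and java.version are defined
    // at startup; arbitrary keys are null unless set via -D or setProperty.
    public static boolean isDefined(String key) {
        return System.getProperty(key) != null;
    }
}
```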
[jira] [Updated] (NIFI-7493) XML Schema Inference can infer a type of String when it should be Record
[ https://issues.apache.org/jira/browse/NIFI-7493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-7493: - Description: >From the mailing list: {quote}I have configured a XMLReader to use the Infer Schema. The other issue is that I have problems converting sub records. My records looks something like this: John Doe some there workingman New York A Company The issues are with the subrecords in part 3. I have configured the XMLReader property "Field Name for Content" = value When the data is being converted via a XMLWriter the output for the additionalInfo fields looks like this: MapRecord[\{name=Location, value=New York}] MapRecord[\{name=Company, value=A Company}] If I use a JSONWriter I gets this: "Part3": { "Details": { "additionalInfo": [ "MapRecord[\{name=Location, value=New York}]", "MapRecord[\{name=Company, value=A Company}]" ] } }{quote} The issue appears to be that "additionalInfo" is being inferred as a String, but the XML Reader is returning a Record. This is probably because the "additionalInfo" element contains String content and no child nodes. However, it does have attributes. As a result, the XML Reader will return a Record. I'm guessing that attributes are not taken into account in the schema inference, though, and since "additionalInfo" has no child nodes but has textual content, it must be a String. was: >From the mailing list: I have configured a XMLReader to use the Infer Schema. The other issue is that I have problems converting sub records. My records looks something like this: John Doe some there workingman New York A Company The issues are with the subrecords in part 3. 
I have configured the XMLReader property "Field Name for Content" = value When the data is being converted via a XMLWriter the output for the additionalInfo fields looks like this: MapRecord[\{name=Location, value=New York}] MapRecord[\{name=Company, value=A Company}] If I use a JSONWriter I gets this: "Part3": { "Details": { "additionalInfo": [ "MapRecord[\{name=Location, value=New York}]", "MapRecord[\{name=Company, value=A Company}]" ] } } The issue appears to be that "additionalInfo" is being inferred as a String, but the XML Reader is returning a Record. This is probably because the "additionalInfo" element contains String content and no child nodes. However, it does have attributes. As a result, the XML Reader will return a Record. I'm guessing that attributes are not taken into account in the schema inference, though, and since "additionalInfo" has no child nodes but has textual content, it must be a String. > XML Schema Inference can infer a type of String when it should be Record > > > Key: NIFI-7493 > URL: https://issues.apache.org/jira/browse/NIFI-7493 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Mark Payne >Priority: Major > > From the mailing list: > {quote}I have configured a XMLReader to use the Infer Schema. The other issue > is that I have problems converting sub records. My records looks something > like this: John Doe > some there > workingman > New York > A Company > > > > > The issues are with the subrecords in part 3. 
I have configured the XMLReader > property "Field Name for Content" = value > > When the data is being converted via a XMLWriter the output for the > additionalInfo fields looks like this: > MapRecord[\{name=Location, > value=New York}] > MapRecord[\{name=Company, value=A > Company}] > > > > If I use a JSONWriter I gets this: > "Part3": { "Details": { > "additionalInfo": [ "MapRecord[\{name=Location, value=New York}]", > "MapRecord[\{name=Company, value=A Company}]" ] > } > }{quote} > The issue appears to be that "additionalInfo" is being inferred as a String, > but the XML Reader is returning a Record. > > This is probably because the "additionalInfo" element contains String > content and no child nodes. However, it does have attributes. As a result, > the XML Reader will return a Record. I'm guessing that attributes are not > taken into account in the schema inference, though, and since > "additionalInfo" has no child nodes but has textual content, it must be a > String. -- This message was sent by Atlassian Jira (v8.3.4#803005)
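The inference mismatch described above is easy to state mechanically: an element that carries attributes is returned by the XML Reader as a Record, so schema inference should treat it as a Record too, even when it has no child elements. A minimal sketch of that rule follows (C++ used purely for illustration; the `ElementShape` model and function names are hypothetical simplifications, not NiFi's actual Java inference API):

```cpp
// Hypothetical, simplified model of one XML element as seen by schema inference.
struct ElementShape {
  bool has_child_elements;
  bool has_attributes;
  bool has_text_content;
};

enum class InferredType { String, Record };

// Naive inference (the reported behavior): only child elements produce a Record,
// so an element like <additionalInfo type="Location">New York</additionalInfo>
// is inferred as String even though the reader will emit a Record for it.
InferredType inferNaive(const ElementShape& e) {
  return e.has_child_elements ? InferredType::Record : InferredType::String;
}

// Proposed behavior: attributes also force a Record, matching what the
// XML Reader actually returns at runtime.
InferredType inferWithAttributes(const ElementShape& e) {
  if (e.has_child_elements || e.has_attributes) {
    return InferredType::Record;
  }
  return InferredType::String;
}
```

Under this rule, an `additionalInfo` element with only an attribute and text content infers as Record, consistent with the `MapRecord[{name=Location, value=New York}]` values the reader produces.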
[GitHub] [nifi] nielsbasjes commented on pull request #3734: NIFI-6666 Add Useragent Header to InvokeHTTP requests
nielsbasjes commented on pull request #3734: URL: https://github.com/apache/nifi/pull/3734#issuecomment-635406203 @MikeThomsen Yes indeed my change broke the build. The problem here is that the variable `${user.name}` is undefined in the "normal" setup and my change simply introduces it in the system with a valid value. This test assumes this value not to be set at all and thus fails because it now has an unexpected default value. I fixed this by renaming this test variable from user.name to login.name
[jira] [Created] (NIFI-7493) XML Schema Inference can infer a type of String when it should be Record
Mark Payne created NIFI-7493: Summary: XML Schema Inference can infer a type of String when it should be Record Key: NIFI-7493 URL: https://issues.apache.org/jira/browse/NIFI-7493 Project: Apache NiFi Issue Type: Bug Components: Extensions Reporter: Mark Payne From the mailing list: I have configured a XMLReader to use the Infer Schema. The other issue is that I have problems converting sub records. My records looks something like this: John Doe some there workingman New York A Company The issues are with the subrecords in part 3. I have configured the XMLReader property "Field Name for Content" = value When the data is being converted via a XMLWriter the output for the additionalInfo fields looks like this: MapRecord[\{name=Location, value=New York}] MapRecord[\{name=Company, value=A Company}] If I use a JSONWriter I gets this: "Part3": { "Details": { "additionalInfo": [ "MapRecord[\{name=Location, value=New York}]", "MapRecord[\{name=Company, value=A Company}]" ] } } The issue appears to be that "additionalInfo" is being inferred as a String, but the XML Reader is returning a Record. This is probably because the "additionalInfo" element contains String content and no child nodes. However, it does have attributes. As a result, the XML Reader will return a Record. I'm guessing that attributes are not taken into account in the schema inference, though, and since "additionalInfo" has no child nodes but has textual content, it must be a String.
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support
szaszm commented on a change in pull request #784: URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r431894027 ## File path: libminifi/test/script-tests/ExecutePythonProcessorTests.cpp ## @@ -0,0 +1,276 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#define CATCH_CONFIG_MAIN + +#include +#include +#include + +#include "../TestBase.h" + +#include "processors/GetFile.h" +#include "python/ExecutePythonProcessor.h" +#include "processors/LogAttribute.h" +#include "processors/PutFile.h" +#include "utils/file/FileUtils.h" + +namespace { + +#include +#define GetCurrentDir getcwd Review comment: Sorry, I didn't fully understand that part on first reading.
[GitHub] [nifi] pgyori opened a new pull request #4303: NIFI-7312: Enable search in variable registry of root process group
pgyori opened a new pull request #4303: URL: https://github.com/apache/nifi/pull/4303 https://issues.apache.org/jira/browse/NIFI-7312 Description of PR Enables search in variable registry of root process group. ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `master`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? 
### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
[jira] [Resolved] (MINIFICPP-1242) Fix Windows event log iteration
[ https://issues.apache.org/jira/browse/MINIFICPP-1242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpad Boda resolved MINIFICPP-1242. --- Resolution: Fixed > Fix Windows event log iteration > --- > > Key: MINIFICPP-1242 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1242 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Affects Versions: 0.7.0 >Reporter: Arpad Boda >Assignee: Arpad Boda >Priority: Major > Fix For: 0.8.0 > > > There are some errors in Windows event log iteration: > -No reasonable timeout specified > -Errors are not handled/logged properly
[jira] [Updated] (MINIFICPP-1240) Check last modified timestamp of the inode before re-reading the script file of ExecutePythonScript
[ https://issues.apache.org/jira/browse/MINIFICPP-1240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1240: Description: *Acceptance criteria:* *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *without* any update on the script file inbetween *THEN* There should be no log line stating that the script was reloaded *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file inbetween *THEN* On the second execution, the new script file should be executed and a re-read should be logged *Background:* The ExecutePythonScriptProcessor currently rereads the script file every time it is on schedule. This is suboptimal. *Proposal:* As an optimization we may want to check the last modified timestamp of the inode before reading the file. was: *Acceptance criteria:* *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *without* any update on the script file inbetween *THEN* There should be no log line stating that the script was reloaded *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file inbetween *THEN* On the second execution, the new script file should be executed and a re-read should be logged *Background:* The ExecutePythonScriptProcessor currently rereads the script file every time it is on schedule. This is suboptimal. *Proposal:* As an optimization we may want to check the last modified timestamp of the inode before reading the file. 
> Check last modified timestamp of the inode before re-reading the script file > of ExecutePythonScript > --- > > Key: MINIFICPP-1240 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1240 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.7.0 >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Acceptance criteria:* > *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *without* any update on the > script file inbetween > *THEN* There should be no log line stating that the script was reloaded > *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file inbetween > *THEN* On the second execution, the new script file should be executed and a > re-read should be logged > *Background:* > The ExecutePythonScriptProcessor currently rereads the script file every time > it is on schedule. This is suboptimal. > *Proposal:* > As an optimization we may want to check the last modified timestamp of the > inode before reading the file.
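The proposal above can be sketched as a small cache keyed on the file's last-modified timestamp: the content is re-read from disk only when the timestamp changes. This is a hypothetical illustration, not the actual ExecutePythonProcessor code — the `CachedScriptFile` name and its API are invented for the example, and a real implementation would sit behind the processor's schedule hook.

```cpp
#include <filesystem>
#include <fstream>
#include <sstream>
#include <string>

namespace fs = std::filesystem;

// Hypothetical sketch of the proposed optimization: cache the script content
// and only re-read the file when its last-modified timestamp changes.
class CachedScriptFile {
 public:
  // Returns the (possibly cached) script content; reads() counts real disk reads.
  const std::string& get(const fs::path& path) {
    const fs::file_time_type mtime = fs::last_write_time(path);
    if (!loaded_ || mtime != cached_mtime_) {
      std::ifstream in(path);
      std::ostringstream buf;
      buf << in.rdbuf();  // slurp the whole file
      content_ = buf.str();
      cached_mtime_ = mtime;
      loaded_ = true;
      ++reads_;
    }
    return content_;
  }

  int reads() const { return reads_; }

 private:
  bool loaded_ = false;
  fs::file_time_type cached_mtime_{};
  std::string content_;
  int reads_ = 0;
};
```

This matches the acceptance criteria: repeated triggers without a file change cause no re-read (and so no "reloaded" log line), while a modified timestamp forces one.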
[jira] [Updated] (MINIFICPP-1223) Stop reloading script files every time ExecutePythonProcessor is triggered
[ https://issues.apache.org/jira/browse/MINIFICPP-1223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Hunyadi updated MINIFICPP-1223: Description: *Acceptance criteria:* *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" not set) -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file inbetween *THEN* On the second execution the behaviour of the ExecuteScriptProcessor should not change *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" disabled) -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file inbetween *THEN* On the second execution the behaviour of the ExecuteScriptProcessor should not change *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" enabled) -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file inbetween *THEN* On the second execution the behaviour of the ExecuteScriptProcessor should follow the updated script *Background:* For backward compatibility, we went for keeping the behaviour of reading the script file every time the processor is triggered intact. *Proposal:* We would like to add an option called *"Reload on Script Change"* to toggle this with the first major release. was: *Acceptance criteria:* *GIVEN* A Getfile -> ExecutePythonScript -> Putfile workflow *WHEN* The ExecutePythonScript is run twice *with* an update on the script file inbetween *THEN* On the second execution the behaviour of the ExecuteScriptProcessor should not change *Background:* For backward compatibility, we went for keeping the behaviour of reading the script file every time the processor is triggered intact. *Proposal:* We would like to change with the first major release. 
> Stop reloading script files every time ExecutePythonProcessor is triggered > -- > > Key: MINIFICPP-1223 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1223 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Adam Hunyadi >Assignee: Adam Hunyadi >Priority: Minor > Fix For: 1.0.0 > > > *Acceptance criteria:* > *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" not > set) -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file inbetween > *THEN* On the second execution the behaviour of the ExecuteScriptProcessor > should not change > *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" > disabled) -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file inbetween > *THEN* On the second execution the behaviour of the ExecuteScriptProcessor > should not change > *GIVEN* A Getfile -> ExecutePythonScript (with "Reload on Script Change" > enabled) -> Putfile workflow > *WHEN* The ExecutePythonScript is run twice *with* an update on the script > file inbetween > *THEN* On the second execution the behaviour of the ExecuteScriptProcessor > should follow the updated script > *Background:* > For backward compatibility, we went for keeping the behaviour of reading the > script file every time the processor is triggered intact. > *Proposal:* > We would like to add an option called *"Reload on Script Change"* to toggle > this with the first major release.
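The intended semantics of the proposed "Reload on Script Change" property can be sketched as follows. This is an illustrative model only — `ScriptRunnerSketch` and its callable-based file access are invented for the example; the real processor reads an actual script file and, per MINIFICPP-1240, would also check the file's modification time before re-reading.

```cpp
#include <functional>
#include <string>
#include <utility>

// Hypothetical sketch of the "Reload on Script Change" property semantics.
// The script source is injected as a callable so the example needs no real files.
class ScriptRunnerSketch {
 public:
  ScriptRunnerSketch(bool reload_on_script_change,
                     std::function<std::string()> read_script_file)
      : reload_on_script_change_(reload_on_script_change),
        read_script_file_(std::move(read_script_file)) {}

  // Called once per trigger; returns the script that would be executed.
  std::string onTrigger() {
    if (script_.empty() || reload_on_script_change_) {
      // Property disabled (or unset): the script is read once and kept.
      // Property enabled: the script is re-read so edits take effect.
      script_ = read_script_file_();
    }
    return script_;
  }

 private:
  bool reload_on_script_change_;
  std::function<std::string()> read_script_file_;
  std::string script_;
};
```

With the property disabled the second trigger ignores an updated script file (backward-compatible behaviour frozen at first read); with it enabled the updated script is executed, exactly as the GIVEN/WHEN/THEN cases above require.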
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support
hunyadi-dev commented on a change in pull request #784: URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r430369227 ## File path: extensions/script/python/ExecutePythonProcessor.cpp ## @@ -46,144 +50,177 @@ core::Relationship ExecutePythonProcessor::Failure("failure", "Script failures") void ExecutePythonProcessor::initialize() { // initialization requires that we do a little leg work prior to onSchedule // so that we can provide manifest our processor identity - std::set properties; - - std::string prop; - getProperty(ScriptFile.getName(), prop); - - properties.insert(ScriptFile); - properties.insert(ModuleDirectory); - setSupportedProperties(properties); - - std::set relationships; - relationships.insert(Success); - relationships.insert(Failure); - setSupportedRelationships(std::move(relationships)); - setAcceptAllProperties(); - if (!prop.empty()) { -setProperty(ScriptFile, prop); -std::shared_ptr engine; -python_logger_ = logging::LoggerFactory::getAliasedLogger(getName()); + if (getProperties().empty()) { +setSupportedProperties({ + ScriptFile, + ScriptBody, + ModuleDirectory +}); +setAcceptAllProperties(); +setSupportedRelationships({ + Success, + Failure +}); +valid_init_ = false; +return; + } -engine = createEngine(); + python_logger_ = logging::LoggerFactory::getAliasedLogger(getName()); -if (engine == nullptr) { - throw std::runtime_error("No script engine available"); -} + getProperty(ModuleDirectory.getName(), module_directory_); -try { - engine->evalFile(prop); - auto me = shared_from_this(); - triggerDescribe(engine, me); - triggerInitialize(engine, me); + valid_init_ = false; + appendPathForImportModules(); + loadScript(); + try { +if ("" != script_to_exec_) { + std::shared_ptr engine = getScriptEngine(); + engine->eval(script_to_exec_); + auto shared_this = shared_from_this(); + engine->describe(shared_this); + engine->onInitialize(shared_this); + handleEngineNoLongerInUse(std::move(engine)); valid_init_ = true; -} catch 
(std::exception &exception) { - logger_->log_error("Caught Exception %s", exception.what()); - engine = nullptr; - std::rethrow_exception(std::current_exception()); - valid_init_ = false; -} catch (...) { - logger_->log_error("Caught Exception"); - engine = nullptr; - std::rethrow_exception(std::current_exception()); - valid_init_ = false; } - + } + catch (const std::exception& exception) { +logger_->log_error("Caught Exception: %s", exception.what()); +std::rethrow_exception(std::current_exception()); + } + catch (...) { +logger_->log_error("Caught Exception"); +std::rethrow_exception(std::current_exception()); } } void ExecutePythonProcessor::onSchedule(const std::shared_ptr<core::ProcessContext> &context, const std::shared_ptr<core::ProcessSessionFactory> &sessionFactory) { if (!valid_init_) { -throw std::runtime_error("Could not correctly in initialize " + getName()); - } context->getProperty(ScriptFile.getName(), script_file_); context->getProperty(ModuleDirectory.getName(), module_directory_); if (script_file_.empty() && script_engine_.empty()) { -logger_->log_error("Script File must be defined"); -return; +throw std::runtime_error("Could not correctly initialize " + getName()); } - try { -std::shared_ptr engine; - -// Use an existing engine, if one is available -if (script_engine_q_.try_dequeue(engine)) { - logger_->log_debug("Using available %s script engine instance", script_engine_); -} else { - logger_->log_info("Creating new %s script instance", script_engine_); - logger_->log_info("Approximately %d %s script instances created for this processor", script_engine_q_.size_approx(), script_engine_); - - engine = createEngine(); - - if (engine == nullptr) { -throw std::runtime_error("No script engine available"); - } - - if (!script_file_.empty()) { -engine->evalFile(script_file_); - } else { -throw std::runtime_error("No Script File is available to execute"); - } +// TODO(hunyadi): When using "Script File" property, we currently re-read the script file content every time the processor is on schedule. 
This should change to single-read when we release 1.0.0 +// https://issues.apache.org/jira/browse/MINIFICPP-1223 +reloadScriptIfUsingScriptFileProperty(); Review comment: 1) I am happy to do the changes for this. I will raise this as a question the next time we have a team meeting. 2) Good idea, but this change should have its own Jira.
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support
szaszm commented on a change in pull request #784: URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r431868440 ## File path: libminifi/test/script-tests/ExecutePythonProcessorTests.cpp ## @@ -0,0 +1,276 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#define CATCH_CONFIG_MAIN + +#include +#include +#include + +#include "../TestBase.h" + +#include "processors/GetFile.h" +#include "python/ExecutePythonProcessor.h" +#include "processors/LogAttribute.h" +#include "processors/PutFile.h" +#include "utils/file/FileUtils.h" + +namespace { + +#include +#define GetCurrentDir getcwd + +std::string GetCurrentWorkingDir(void) { + char buff[FILENAME_MAX]; + GetCurrentDir(buff, FILENAME_MAX); + std::string current_working_dir(buff); + return current_working_dir; +} + +class ExecutePythonProcessorTestBase { + public: +ExecutePythonProcessorTestBase() : + logTestController_(LogTestController::getInstance()), + logger_(logging::LoggerFactory::getLogger()) { + reInitialize(); +} +virtual ~ExecutePythonProcessorTestBase() { + logTestController_.reset(); + logTestController_.setDebug(); + logTestController_.setDebug(); + logTestController_.setDebug(); + logTestController_.setDebug(); +} + + protected: +void reInitialize() { + testController_.reset(new TestController()); + plan_ = testController_->createPlan(); +} + +std::string createTempDir() { + char dirtemplate[] = "/tmp/gt.XX"; + std::string temp_dir = testController_->createTempDirectory(dirtemplate); + REQUIRE(!temp_dir.empty()); + struct stat buffer; + REQUIRE(-1 != stat(temp_dir.c_str(), &buffer)); + REQUIRE(S_ISDIR(buffer.st_mode)); + return temp_dir; +} + +std::string putFileToDir(const std::string& dir_path, const std::string& file_name, const std::string& content) { + std::string file_path(dir_path + utils::file::FileUtils::get_separator() + file_name); + std::ofstream out_file(file_path); + if (out_file.is_open()) { +out_file << content; +out_file.close(); + } + return file_path; +} + +std::string createTempDirWithFile(const std::string& file_name, const std::string& content) { + std::string temp_dir = createTempDir(); + putFileToDir(temp_dir, file_name, content); + return temp_dir; +} + +std::string getFileContent(const std::string& file_name) { + std::ifstream 
file_handle(file_name); + REQUIRE(file_handle.is_open()); + const std::string file_content{ (std::istreambuf_iterator<char>(file_handle)), (std::istreambuf_iterator<char>())}; + file_handle.close(); + return file_content; +} + +std::string getScriptFullPath(const std::string& script_file_name) { + return SCRIPT_FILES_DIRECTORY + utils::file::FileUtils::get_separator() + script_file_name; +} + +const std::string TEST_FILE_NAME{ "test_file.txt" }; +const std::string TEST_FILE_CONTENT{ "Test text\n" }; +const std::string SCRIPT_FILES_DIRECTORY{ "test_scripts" }; Review comment: That's right. It's a good idea to keep them near the class implementation, but I would keep them inside the class as they are just constants.
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support
hunyadi-dev commented on a change in pull request #784: URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r431867122 ## File path: libminifi/test/script-tests/ExecutePythonProcessorTests.cpp ## @@ -0,0 +1,276 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +#define CATCH_CONFIG_MAIN + +#include +#include +#include + +#include "../TestBase.h" + +#include "processors/GetFile.h" +#include "python/ExecutePythonProcessor.h" +#include "processors/LogAttribute.h" +#include "processors/PutFile.h" +#include "utils/file/FileUtils.h" + +namespace { + +#include +#define GetCurrentDir getcwd + +std::string GetCurrentWorkingDir(void) { + char buff[FILENAME_MAX]; + GetCurrentDir(buff, FILENAME_MAX); + std::string current_working_dir(buff); + return current_working_dir; +} + +class ExecutePythonProcessorTestBase { + public: +ExecutePythonProcessorTestBase() : + logTestController_(LogTestController::getInstance()), + logger_(logging::LoggerFactory::getLogger()) { + reInitialize(); +} +virtual ~ExecutePythonProcessorTestBase() { + logTestController_.reset(); + logTestController_.setDebug(); + logTestController_.setDebug(); + logTestController_.setDebug(); + logTestController_.setDebug(); +} + + protected: +void reInitialize() { + testController_.reset(new TestController()); + plan_ = testController_->createPlan(); +} + +std::string createTempDir() { + char dirtemplate[] = "/tmp/gt.XX"; + std::string temp_dir = testController_->createTempDirectory(dirtemplate); + REQUIRE(!temp_dir.empty()); + struct stat buffer; + REQUIRE(-1 != stat(temp_dir.c_str(), &buffer)); + REQUIRE(S_ISDIR(buffer.st_mode)); + return temp_dir; +} + +std::string putFileToDir(const std::string& dir_path, const std::string& file_name, const std::string& content) { + std::string file_path(dir_path + utils::file::FileUtils::get_separator() + file_name); + std::ofstream out_file(file_path); + if (out_file.is_open()) { +out_file << content; +out_file.close(); + } + return file_path; +} + +std::string createTempDirWithFile(const std::string& file_name, const std::string& content) { + std::string temp_dir = createTempDir(); + putFileToDir(temp_dir, file_name, content); + return temp_dir; +} + +std::string getFileContent(const std::string& file_name) { + std::ifstream 
file_handle(file_name); + REQUIRE(file_handle.is_open()); + const std::string file_content{ (std::istreambuf_iterator<char>(file_handle)), (std::istreambuf_iterator<char>())}; + file_handle.close(); + return file_content; +} + +std::string getScriptFullPath(const std::string& script_file_name) { + return SCRIPT_FILES_DIRECTORY + utils::file::FileUtils::get_separator() + script_file_name; +} + +const std::string TEST_FILE_NAME{ "test_file.txt" }; +const std::string TEST_FILE_CONTENT{ "Test text\n" }; +const std::string SCRIPT_FILES_DIRECTORY{ "test_scripts" }; + +std::unique_ptr testController_; +std::shared_ptr plan_; +LogTestController& logTestController_; +std::shared_ptr logger_; +}; + +class SimplePythonFlowFileTransferTest : public ExecutePythonProcessorTestBase { + public: + enum class Expectation { +OUTPUT_FILE_MATCHES_INPUT, +RUNTIME_RELATIONSHIP_EXCEPTION, +PROCESSOR_INITIALIZATION_EXCEPTION + }; + SimplePythonFlowFileTransferTest() : ExecutePythonProcessorTestBase{} {} + + protected: + void testSimpleFilePassthrough(const Expectation expectation, const core::Relationship& execute_python_out_conn, const std::string& used_as_script_file, const std::string& used_as_script_body) { +reInitialize(); +const std::string input_dir = createTempDirWithFile(TEST_FILE_NAME, TEST_FILE_CONTENT); +const std::string output_dir = createTempDir(); + +addGetFileProcessorToPlan(input_dir); +if (Expectation::PROCESSOR_INITIALIZATION_EXCEPTION == expectation) { + REQUIRE_THROWS(addExecutePythonProcessorToPlan(used_as_script_file, used_as_script_body)); + return; +} +
[GitHub] [nifi] apuntandoanulo commented on a change in pull request #4301: NIFI-7477 Optionally adding validation details as a new flowfile attribute
apuntandoanulo commented on a change in pull request #4301: URL: https://github.com/apache/nifi/pull/4301#discussion_r431866633 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateRecord.java ## @@ -180,6 +180,24 @@ .defaultValue("true") .required(true) .build(); +static final PropertyDescriptor MAX_VALIDATION_DETAILS_LENGTH = new PropertyDescriptor.Builder() +.name("maximum-validation-details-length") +.displayName("Maximum Validation Details Length") +.description("Specifies the maximum number of characters that validation details value can have. Any characters beyond the max will be truncated.") Review comment: You're right!
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support
hunyadi-dev commented on a change in pull request #784: URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r431866934 ## File path: libminifi/test/script-tests/PythonExecuteScriptTests.cpp ## @@ -29,6 +29,15 @@ #include "processors/GetFile.h" #include "processors/PutFile.h" +// ,-, +// | ! | Disclaimer | ! | +// |---' '---' +// | | +// | This file contains tests for the "ExecuteScript" processor, | +// | not for the "ExecutePython" processor. | +// | | +// '-' Review comment: Renamed the tests.
[GitHub] [nifi] jairo-henao commented on a change in pull request #4301: NIFI-7477 Optionally adding validation details as a new flowfile attribute
jairo-henao commented on a change in pull request #4301: URL: https://github.com/apache/nifi/pull/4301#discussion_r431865765 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateRecord.java ## @@ -180,6 +180,24 @@ .defaultValue("true") .required(true) .build(); +static final PropertyDescriptor MAX_VALIDATION_DETAILS_LENGTH = new PropertyDescriptor.Builder() +.name("maximum-validation-details-length") +.displayName("Maximum Validation Details Length") +.description("Specifies the maximum number of characters that validation details value can have. Any characters beyond the max will be truncated.") Review comment: You're right!
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support
hunyadi-dev commented on a change in pull request #784: URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r431865524 ## File path: libminifi/test/script-tests/ExecutePythonProcessorTests.cpp ## @@ -0,0 +1,276 @@ [license header and includes identical to the excerpt quoted above] +namespace { + +#include <unistd.h> +#define GetCurrentDir getcwd Review comment: Yes, I already said I have realized that this was there as a bug as well :) (screenshot: https://user-images.githubusercontent.com/64011968/83152173-caee3400-a0fd-11ea-8be1-bd919e286c87.png)
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support
hunyadi-dev commented on a change in pull request #784: URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r431862488 ## File path: libminifi/test/script-tests/ExecutePythonProcessorTests.cpp ## @@ -0,0 +1,276 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +#define CATCH_CONFIG_MAIN + +#include <fstream> +#include <memory> +#include <string> + +#include "../TestBase.h" + +#include "processors/GetFile.h" +#include "python/ExecutePythonProcessor.h" +#include "processors/LogAttribute.h" +#include "processors/PutFile.h" +#include "utils/file/FileUtils.h" + +namespace { + +#include <unistd.h> +#define GetCurrentDir getcwd + +std::string GetCurrentWorkingDir(void) { + char buff[FILENAME_MAX]; + GetCurrentDir(buff, FILENAME_MAX); + std::string current_working_dir(buff); + return current_working_dir; +} + +class ExecutePythonProcessorTestBase { + public: +ExecutePythonProcessorTestBase() : + logTestController_(LogTestController::getInstance()), + logger_(logging::LoggerFactory::getLogger()) { + reInitialize(); +} +virtual ~ExecutePythonProcessorTestBase() { + logTestController_.reset(); + logTestController_.setDebug(); + logTestController_.setDebug(); + logTestController_.setDebug(); + logTestController_.setDebug(); +} + + protected: +void reInitialize() { + testController_.reset(new TestController()); + plan_ = testController_->createPlan(); +} + +std::string createTempDir() { + char dirtemplate[] = "/tmp/gt.XXXXXX"; + std::string temp_dir = testController_->createTempDirectory(dirtemplate); + REQUIRE(!temp_dir.empty()); + struct stat buffer; + REQUIRE(-1 != stat(temp_dir.c_str(), &buffer)); + REQUIRE(S_ISDIR(buffer.st_mode)); + return temp_dir; +} + +std::string putFileToDir(const std::string& dir_path, const std::string& file_name, const std::string& content) { + std::string file_path(dir_path + utils::file::FileUtils::get_separator() + file_name); + std::ofstream out_file(file_path); + if (out_file.is_open()) { +out_file << content; +out_file.close(); + } + return file_path; +} + +std::string createTempDirWithFile(const std::string& file_name, const std::string& content) { + std::string temp_dir = createTempDir(); + putFileToDir(temp_dir, file_name, content); + return temp_dir; +} + +std::string getFileContent(const std::string& file_name) { + std::ifstream
file_handle(file_name); + REQUIRE(file_handle.is_open()); + const std::string file_content{ (std::istreambuf_iterator<char>(file_handle)), (std::istreambuf_iterator<char>())}; + file_handle.close(); + return file_content; +} Review comment: I would argue that the workflow would not be any more reusable, as having a `TestController` is a dependency either way. Also, I would rather have the REQUIRE assertions inside the test-helper function for two reasons: - Someone can easily forget adding EXPECT_THROW on the call side, ending up accidentally satisfying "EXPECT_THROW" calls on an upper call stack. - Adding the expectation for throwing would make the code look like we test the helper functions inside of unit tests, and it might not be obvious why there is an expectation for not throwing for a given test.
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support
szaszm commented on a change in pull request #784: URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r431860874 ## File path: libminifi/test/script-tests/ExecutePythonProcessorTests.cpp ## @@ -0,0 +1,276 @@ [license header and includes identical to the excerpt quoted above] +namespace { + +#include <unistd.h> +#define GetCurrentDir getcwd Review comment: ``` #include <unistd.h> ``` shouldn't be inside an anonymous namespace. We shouldn't mess with the linkage of declarations of system headers.
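szaszm's point can be sketched as follows. This is an illustrative arrangement only, not the committed fix: the system headers sit at file scope (keeping their declarations' normal linkage), and only the project's own helper goes inside the anonymous namespace. The error check on `getcwd` is an addition the quoted code does not have.

```cpp
// System headers belong at file scope, outside any anonymous namespace,
// so declarations from them keep their normal (external) linkage.
#include <unistd.h>  // getcwd
#include <cstdio>    // FILENAME_MAX
#include <string>

namespace {

// Returns the current working directory, or an empty string on failure.
std::string GetCurrentWorkingDir() {
  char buff[FILENAME_MAX];
  if (getcwd(buff, FILENAME_MAX) == nullptr) {
    return std::string{};
  }
  return std::string{buff};
}

}  // namespace
```

Only the test helper has internal linkage; `getcwd` and `FILENAME_MAX` come from headers included at the top level.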
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support
szaszm commented on a change in pull request #784: URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r431859372 ## File path: libminifi/test/script-tests/CMakeLists.txt ## @@ -19,6 +19,13 @@ if (NOT DISABLE_PYTHON_SCRIPTING) file(GLOB EXECUTESCRIPT_PYTHON_INTEGRATION_TESTS "Python*.cpp") + file(GLOB EXECUTEPYTHONPROCESSOR_UNIT_TESTS "ExecutePythonProcessorTests.cpp") + file(GLOB PY_SOURCES "python/*.cpp") + find_package(PythonLibs 3.5) + if (NOT PYTHONLIBS_FOUND) + find_package(PythonLibs 3.0 REQUIRED) + endif() Review comment: Ok, then leave as is for now. Thanks for the clarification.
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support
hunyadi-dev commented on a change in pull request #784: URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r431855542 ## File path: libminifi/test/script-tests/ExecutePythonProcessorTests.cpp ## @@ -0,0 +1,276 @@ [quoted diff context identical to the excerpt above, through getFileContent] +std::string getScriptFullPath(const std::string& script_file_name) { + return SCRIPT_FILES_DIRECTORY + utils::file::FileUtils::get_separator() + script_file_name; +} + +const std::string TEST_FILE_NAME{ "test_file.txt" }; +const std::string TEST_FILE_CONTENT{ "Test text\n" }; +const std::string SCRIPT_FILES_DIRECTORY{ "test_scripts" }; Review comment: I mean the initialization of this static member goes outside the class, so if people check the declaration they won't immediately see these values.
[GitHub] [nifi] mattyb149 commented on a change in pull request #4301: NIFI-7477 Optionally adding validation details as a new flowfile attribute
mattyb149 commented on a change in pull request #4301: URL: https://github.com/apache/nifi/pull/4301#discussion_r431851903 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateRecord.java ## @@ -207,6 +225,8 @@ properties.add(SCHEMA_TEXT); properties.add(ALLOW_EXTRA_FIELDS); properties.add(STRICT_TYPE_CHECKING); +properties.add(MAX_VALIDATION_DETAILS_LENGTH); Review comment: Although the order of properties is not guaranteed (in the UI, e.g.), they tend to show up in the order they were added. Though not a requirement, I recommend switching the order of these properties just for clarity, totally up to you though. ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestValidateRecord.java ## @@ -593,5 +593,49 @@ public void testValidateMaps() throws IOException, InitializationException, Malf assertEquals(2, ( (Map) ((Record) data[2]).getValue("points")).size()); } } + +@Test +public void testValidationsDetailsAttributeForInvalidRecords() throws InitializationException, UnsupportedEncodingException, IOException { +final String schema = new String(Files.readAllBytes(Paths.get("src/test/resources/TestUpdateRecord/schema/person-with-name-string.avsc")), "UTF-8"); + +final CSVReader csvReader = new CSVReader(); +runner.addControllerService("reader", csvReader); +runner.setProperty(csvReader, SchemaAccessUtils.SCHEMA_ACCESS_STRATEGY, SchemaAccessUtils.SCHEMA_TEXT_PROPERTY); +runner.setProperty(csvReader, SchemaAccessUtils.SCHEMA_TEXT, schema); +runner.setProperty(csvReader, CSVUtils.FIRST_LINE_IS_HEADER, "false"); +runner.setProperty(csvReader, CSVUtils.QUOTE_MODE, CSVUtils.QUOTE_MINIMAL.getValue()); +runner.setProperty(csvReader, CSVUtils.TRAILING_DELIMITER, "false"); +runner.enableControllerService(csvReader); + +final MockRecordWriter validWriter = new MockRecordWriter("valid", false); 
+runner.addControllerService("writer", validWriter); +runner.enableControllerService(validWriter); + +final MockRecordWriter invalidWriter = new MockRecordWriter("invalid", true); +runner.addControllerService("invalid-writer", invalidWriter); +runner.enableControllerService(invalidWriter); + +runner.setProperty(ValidateRecord.RECORD_READER, "reader"); +runner.setProperty(ValidateRecord.RECORD_WRITER, "writer"); +runner.setProperty(ValidateRecord.INVALID_RECORD_WRITER, "invalid-writer"); +runner.setProperty(ValidateRecord.ALLOW_EXTRA_FIELDS, "false"); +runner.setProperty(ValidateRecord.MAX_VALIDATION_DETAILS_LENGTH, "20"); +runner.setProperty(ValidateRecord.VALIDATION_DETAILS_ATTRIBUTE_NAME, "valDetails"); + +final String content = "1, John Doe\n" ++ "2, Jane Doe\n" ++ "Three, Jack Doe\n"; + +runner.enqueue(content); +runner.run(); + +runner.assertTransferCount(ValidateRecord.REL_INVALID, 1); +runner.assertTransferCount(ValidateRecord.REL_FAILURE, 0); + +final MockFlowFile invalidFlowFile = runner.getFlowFilesForRelationship(ValidateRecord.REL_INVALID).get(0); +invalidFlowFile.assertAttributeEquals("record.count", "1"); +invalidFlowFile.assertContentEquals("invalid\n\"Three\",\"Jack Doe\"\n"); +invalidFlowFile.assertAttributeExists("valDetails"); Review comment: Since you are setting the max details length to 20, probably a good idea to verify that here. If the output is deterministic for this test, you could also verify the actual attribute value. ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ValidateRecord.java ## @@ -180,6 +180,24 @@ .defaultValue("true") .required(true) .build(); +static final PropertyDescriptor MAX_VALIDATION_DETAILS_LENGTH = new PropertyDescriptor.Builder() +.name("maximum-validation-details-length") +.displayName("Maximum Validation Details Length") +.description("Specifies the maximum number of characters that validation details value can have. 
Any characters beyond the max will be truncated.") Review comment: Just for completeness, it would be good to mention in this description that this property is only used if `Validation Details Attribute Name` is set. Also since it's being evaluated at the same time as the attribute name, what do you think about supporting FlowFile attributes for the expression evaluation?
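The truncation semantics the property describes ("any characters beyond the max will be truncated") reduce to a one-line rule. A minimal sketch with a hypothetical helper name — ValidateRecord's actual (Java) implementation is not shown in the excerpt:

```cpp
#include <cstddef>
#include <string>

// Hypothetical helper mirroring the "Maximum Validation Details Length"
// behaviour: keep at most max_len characters of the validation details.
std::string truncateDetails(const std::string& details, std::size_t max_len) {
  if (details.size() <= max_len) {
    return details;  // short enough; pass through unchanged
  }
  return details.substr(0, max_len);  // drop everything past the limit
}
```

With the test's setting of 20, a 30-character details string would come back as its first 20 characters, which is what mattyb149 suggests asserting on.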
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support
szaszm commented on a change in pull request #784: URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r431852482 ## File path: libminifi/test/script-tests/ExecutePythonProcessorTests.cpp ## @@ -0,0 +1,276 @@ [quoted diff context identical to the excerpt above, through getFileContent] +std::string getScriptFullPath(const std::string& script_file_name) { + return SCRIPT_FILES_DIRECTORY + utils::file::FileUtils::get_separator() + script_file_name; +} + +const std::string TEST_FILE_NAME{ "test_file.txt" }; +const std::string TEST_FILE_CONTENT{ "Test text\n" }; +const std::string SCRIPT_FILES_DIRECTORY{ "test_scripts" }; Review comment: How does making them `static` decrease readability? If you mean the added `static` keyword increasing the line length, I think the added information of shared constant objects balances that. An alternative is extracting them from the class and putting them in an anonymous namespace, keeping the lines short and the scope limited to the translation unit.
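The alternative szaszm mentions — moving the constants out of the class into an anonymous namespace — might look like this. A sketch of the suggestion, not the code that was ultimately merged:

```cpp
#include <string>

namespace {

// Declaration and value sit together, so readers see the values immediately,
// and internal linkage keeps the names scoped to this translation unit.
const std::string TEST_FILE_NAME{ "test_file.txt" };
const std::string TEST_FILE_CONTENT{ "Test text\n" };
const std::string SCRIPT_FILES_DIRECTORY{ "test_scripts" };

}  // namespace
```

This avoids both drawbacks raised in the thread: no out-of-class definition split from the declaration, and no extra `static` keyword on each member.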
[jira] [Assigned] (NIFI-6672) Expression language plus operation doesn't check for overflow
[ https://issues.apache.org/jira/browse/NIFI-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alessandro D'Armiento reassigned NIFI-6672: --- Assignee: Alessandro D'Armiento > Expression language plus operation doesn't check for overflow > - > > Key: NIFI-6672 > URL: https://issues.apache.org/jira/browse/NIFI-6672 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.9.2 >Reporter: Alessandro D'Armiento >Assignee: Alessandro D'Armiento >Priority: Major > Fix For: 1.12.0 > > Attachments: image-2019-09-14-17-32-58-740.png > > Time Spent: 0.5h > Remaining Estimate: 0h > > To reproduce the bug, create a FF with an attribute equals to Long.MAX, then > add 100 to that attribute in a following UpdateAttribute processor. The > property will overflow to a negative number without throwing any exception > !image-2019-09-14-17-32-58-740.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (NIFI-6674) Expression language minus operation doesn't check for overflow
[ https://issues.apache.org/jira/browse/NIFI-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alessandro D'Armiento reassigned NIFI-6674: --- Assignee: Alessandro D'Armiento > Expression language minus operation doesn't check for overflow > -- > > Key: NIFI-6674 > URL: https://issues.apache.org/jira/browse/NIFI-6674 > Project: Apache NiFi > Issue Type: Bug >Reporter: Alessandro D'Armiento >Assignee: Alessandro D'Armiento >Priority: Major > Fix For: 1.12.0 > > Attachments: image-2019-09-14-17-51-41-809.png > > Time Spent: 1h 10m > Remaining Estimate: 0h > > To reproduce the bug, create a FF with an attribute equals to Long.MIN, then > subtract 100 to that attribute in a following UpdateAttribute processor. The > property will overflow to a positive number without throwing any exception > !image-2019-09-14-17-51-41-809.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (NIFI-6673) Expression language multiply operation doesn't check for overflow
[ https://issues.apache.org/jira/browse/NIFI-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alessandro D'Armiento reassigned NIFI-6673: --- Assignee: Alessandro D'Armiento > Expression language multiply operation doesn't check for overflow > - > > Key: NIFI-6673 > URL: https://issues.apache.org/jira/browse/NIFI-6673 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.9.2 >Reporter: Alessandro D'Armiento >Assignee: Alessandro D'Armiento >Priority: Major > Fix For: 1.12.0 > > Attachments: image-2019-09-14-17-38-19-397.png > > Time Spent: 20m > Remaining Estimate: 0h > > To reproduce the bug, create a FF with an attribute equals to Long.MAX, then > multiply it by 2 to that attribute in a following UpdateAttribute processor. > The property will overflow to a negative number without throwing any exception > !image-2019-09-14-17-38-19-397.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
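All three tickets (NIFI-6672, NIFI-6674, NIFI-6673) describe the same missing guard: the expression-language arithmetic wraps around silently instead of signaling overflow. A hedged sketch of checked add/subtract/multiply using GCC/Clang overflow builtins — NiFi itself is Java and would reach for something like `Math.addExact`; this only illustrates the check being asked for:

```cpp
#include <cstdint>
#include <optional>

// Each helper returns std::nullopt instead of silently wrapping around.
// __builtin_*_overflow are GCC/Clang intrinsics that report whether the
// exact mathematical result fits in the destination type.
std::optional<int64_t> checkedAdd(int64_t a, int64_t b) {
  int64_t result;
  if (__builtin_add_overflow(a, b, &result)) return std::nullopt;
  return result;
}

std::optional<int64_t> checkedSub(int64_t a, int64_t b) {
  int64_t result;
  if (__builtin_sub_overflow(a, b, &result)) return std::nullopt;
  return result;
}

std::optional<int64_t> checkedMul(int64_t a, int64_t b) {
  int64_t result;
  if (__builtin_mul_overflow(a, b, &result)) return std::nullopt;
  return result;
}
```

The reproduction steps in the tickets map directly onto these helpers: `Long.MAX + 100`, `Long.MIN - 100`, and `Long.MAX * 2` would each report overflow rather than produce a wrong-signed value.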
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #784: MINIFICPP-1206 - Rework and test ExecutePythonProcessor, add in-place script support
szaszm commented on a change in pull request #784: URL: https://github.com/apache/nifi-minifi-cpp/pull/784#discussion_r431848847

## File path: libminifi/test/script-tests/ExecutePythonProcessorTests.cpp ##

@@ -0,0 +1,276 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#define CATCH_CONFIG_MAIN
+
+#include <memory>
+#include <string>
+#include <fstream>
+
+#include "../TestBase.h"
+
+#include "processors/GetFile.h"
+#include "python/ExecutePythonProcessor.h"
+#include "processors/LogAttribute.h"
+#include "processors/PutFile.h"
+#include "utils/file/FileUtils.h"
+
+namespace {
+
+#include <unistd.h>
+#define GetCurrentDir getcwd
+
+std::string GetCurrentWorkingDir(void) {
+  char buff[FILENAME_MAX];
+  GetCurrentDir(buff, FILENAME_MAX);
+  std::string current_working_dir(buff);
+  return current_working_dir;
+}
+
+class ExecutePythonProcessorTestBase {
+ public:
+  ExecutePythonProcessorTestBase() :
+      logTestController_(LogTestController::getInstance()),
+      logger_(logging::LoggerFactory<ExecutePythonProcessorTestBase>::getLogger()) {
+    reInitialize();
+  }
+  virtual ~ExecutePythonProcessorTestBase() {
+    logTestController_.reset();
+    logTestController_.setDebug<minifi::processors::GetFile>();
+    logTestController_.setDebug<minifi::python::processors::ExecutePythonProcessor>();
+    logTestController_.setDebug<minifi::processors::LogAttribute>();
+    logTestController_.setDebug<minifi::processors::PutFile>();
+  }
+
+ protected:
+  void reInitialize() {
+    testController_.reset(new TestController());
+    plan_ = testController_->createPlan();
+  }
+
+  std::string createTempDir() {
+    char dirtemplate[] = "/tmp/gt.XXXXXX";
+    std::string temp_dir = testController_->createTempDirectory(dirtemplate);
+    REQUIRE(!temp_dir.empty());
+    struct stat buffer;
+    REQUIRE(-1 != stat(temp_dir.c_str(), &buffer));
+    REQUIRE(S_ISDIR(buffer.st_mode));
+    return temp_dir;
+  }
+
+  std::string putFileToDir(const std::string& dir_path, const std::string& file_name, const std::string& content) {
+    std::string file_path(dir_path + utils::file::FileUtils::get_separator() + file_name);
+    std::ofstream out_file(file_path);
+    if (out_file.is_open()) {
+      out_file << content;
+      out_file.close();
+    }
+    return file_path;
+  }
+
+  std::string createTempDirWithFile(const std::string& file_name, const std::string& content) {
+    std::string temp_dir = createTempDir();
+    putFileToDir(temp_dir, file_name, content);
+    return temp_dir;
+  }
+
+  std::string getFileContent(const std::string& file_name) {
+    std::ifstream file_handle(file_name);
+    REQUIRE(file_handle.is_open());
+    const std::string file_content{ (std::istreambuf_iterator<char>(file_handle)), (std::istreambuf_iterator<char>()) };
+    file_handle.close();
+    return file_content;
+  }

Review comment: 1. I'd replace REQUIRE with check + throw, so that the helpers are reusable and the control flow is the same (tests fail on uncaught exceptions). 3. The first function only operates on the file if the open was successful. If not, there is no indication of the error to the caller. Since there is already some kind of check, I recommend throwing an exception in the else branch.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
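The reviewer's suggestion — check and throw in helpers instead of using REQUIRE — can be sketched as follows. This is a minimal illustration, not the PR's eventual code; the helper name `readFileContent` is hypothetical:

```cpp
#include <cassert>
#include <fstream>
#include <iterator>
#include <stdexcept>
#include <string>

// Helper that checks and throws instead of calling a test-framework macro.
// It stays reusable outside Catch2 test cases, and the control flow for
// tests is the same: an uncaught exception fails the test case.
std::string readFileContent(const std::string& file_name) {
  std::ifstream file_handle(file_name);
  if (!file_handle.is_open()) {
    throw std::runtime_error("failed to open file: " + file_name);
  }
  return std::string{std::istreambuf_iterator<char>(file_handle),
                     std::istreambuf_iterator<char>()};
}
```

This also addresses the second point: a failed open is reported to the caller rather than silently skipped.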
[jira] [Commented] (NIFI-6287) Add ability to hash an attribute or value in expression language
[ https://issues.apache.org/jira/browse/NIFI-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118667#comment-17118667 ] ASF subversion and git services commented on NIFI-6287: --- Commit 0f4b79b55ec7e4a85334d4a0d3e7200021950d1a in nifi's branch refs/heads/MINIFI-422 from Phillip Grenier [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=0f4b79b ] NIFI-6255 NIFI-6287: Hash function for expression language and record path. NIFI-6255 NIFI-6287: Rebased to match the new expression language interface NIFI-6255 NIFI-6287: Fix wildcard imports and unused imports NIFI-6255 NIFI-6287: Move to the common codec DigetUtils Update commons-codec This closes #3624 Signed-off-by: Mike Thomsen > Add ability to hash an attribute or value in expression language > > > Key: NIFI-6287 > URL: https://issues.apache.org/jira/browse/NIFI-6287 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Ed Jackson >Assignee: Phillip Grenier >Priority: Trivial > Labels: expression-language > Fix For: 1.12.0 > > > Similar to 6255 > > In expression language it would be very useful to hash arbitrary data or > attributes from the incoming flow file. For example, if the incoming flow > file has an attribute called 'serial_num', the user can hash this value in > expression language like `${hash('MD5', 'serial_num')}` or similar syntax. > > Today users need to add a CryptographicHashAttribute processor to accomplish > this. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-7485) Update dependency
[ https://issues.apache.org/jira/browse/NIFI-7485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118681#comment-17118681 ] ASF subversion and git services commented on NIFI-7485: --- Commit aa804cfcebea23f1316b0c5ad5a6140bec57de01 in nifi's branch refs/heads/MINIFI-422 from Mike Thomsen [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=aa804cf ] NIFI-7485 Updated commons-configuration2. NIFI-7485 Found more instances that needed updating. This closes #4295 > Update dependency > - > > Key: NIFI-7485 > URL: https://issues.apache.org/jira/browse/NIFI-7485 > Project: Apache NiFi > Issue Type: Task >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Minor > Fix For: 1.12.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > We need to update nifi-security-utils to use newer commons components. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-7453) PutKudu kerberos issue after TGT expires
[ https://issues.apache.org/jira/browse/NIFI-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118678#comment-17118678 ] ASF subversion and git services commented on NIFI-7453: --- Commit ca65bba5d720550aab97fcfc58be46e1b77001d3 in nifi's branch refs/heads/MINIFI-422 from Tamas Palfy [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ca65bba ] NIFI-7453 In PutKudu creating a new Kudu client when refreshing TGT NIFI-7453 Creating a new Kudu client when refreshing TGT in KerberosPasswordUser as well. (Applied to KerberosKeytabUser only before.) NIFI-7453 Safely closing old Kudu client before creating a new one. NIFI-7453 Visibility adjustment. This closes #4276. Signed-off-by: Peter Turcsanyi > PutKudu kerberos issue after TGT expires > - > > Key: NIFI-7453 > URL: https://issues.apache.org/jira/browse/NIFI-7453 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Tamas Palfy >Assignee: Tamas Palfy >Priority: Major > Fix For: 1.12.0 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > When PutKudu is used with kerberos authentication, it stops working when the > TGT expires with the following logs/exceptions: > {noformat} > ERROR org.apache.nifi.processors.kudu.PutKudu: > PutKudu[id=4ad63284-cb39-1c78-bd0e-c280df797039] Failed to write due to Row > error for primary key="feebfe81-4ee6-4a8b-91ca-311e1c4f8749", tablet=null, > server=null, status=Runtime error: cannot re-acquire authentication token > after 5 attempts (Couldn't find a valid master in (HOST:PORT). Exceptions > received: [org.apache.kudu.client.NonRecoverableException: server requires > authentication, but client does not have Kerberos credentials (tgt). 
> Authentication tokens were not used because this connection will be used to > acquire a new token and therefore requires primary credentials]) > 2020-05-13 09:27:05,157 INFO org.apache.kudu.client.ConnectToCluster: Unable > to connect to master HOST:PORT: server requires authentication, but client > does not have Kerberos credentials (tgt). Authentication tokens were not used > because this connection will be used to acquire a new token and therefore > requires primary credentials > 2020-05-13 09:27:05,159 WARN org.apache.kudu.client.AsyncKuduSession: > unexpected tablet lookup failure for operation KuduRpc(method=Write, > tablet=null, attempt=0, DeadlineTracker(timeout=0, elapsed=15), No traces) > org.apache.kudu.client.NonRecoverableException: cannot re-acquire > authentication token after 5 attempts (Couldn't find a valid master in > (HOST:PORT). Exceptions received: [org.apache.kudu.client.NonRecover > ableException: server requires authentication, but client does not have > Kerberos credentials (tgt). Authentication tokens were not used because this > connection will be used to acquire a new token and therefore requires primary > credentials]) > at > org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:158) > at > org.apache.kudu.client.AuthnTokenReacquirer$1NewAuthnTokenErrB.call(AuthnTokenReacquirer.java:141) > at com.stumbleupon.async.Deferred.doCall(Deferred.java:1280) > at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1259) > at com.stumbleupon.async.Deferred.callback(Deferred.java:1002) > at > org.apache.kudu.client.ConnectToCluster.incrementCountAndCheckExhausted(ConnectToCluster.java:246) > ... > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005)
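The fix recreates the Kudu client when the TGT is refreshed, safely closing the old client first. The actual change is Java code in PutKudu; the general pattern can be sketched in C++ with hypothetical names:

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in for the Kudu client; counts explicit close() calls.
struct Client {
  static int closes;
  void close() { ++closes; }
};
int Client::closes = 0;

// On each credential refresh, close the old client before swapping in a
// freshly built one, so no connection keeps using expired credentials.
class ClientHolder {
 public:
  void refreshClient() {
    if (client_) {
      client_->close();  // safely close the old client first
    }
    client_ = std::make_unique<Client>();
  }
  Client* get() const { return client_.get(); }

 private:
  std::unique_ptr<Client> client_;
};
```

The design point is that a client built with expired Kerberos credentials cannot simply re-acquire a token, so replacing the whole client on refresh is the reliable option.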
[jira] [Commented] (NIFI-7445) Add Conflict Resolution property to PutAzureDataLakeStorage processor
[ https://issues.apache.org/jira/browse/NIFI-7445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118682#comment-17118682 ] ASF subversion and git services commented on NIFI-7445: --- Commit 1dd0e920402d20917bf3bf421ce14ab3dc0749a5 in nifi's branch refs/heads/MINIFI-422 from Peter Gyori [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=1dd0e92 ] NIFI-7445: Add Conflict Resolution property to PutAzureDataLakeStorage processor NIFI-7445: Add Conflict Resolution property to PutAzureDataLakeStorage processor Made warning and error messages more informative. Refactored flowFile assertion in the tests. This closes #4287. Signed-off-by: Peter Turcsanyi > Add Conflict Resolution property to PutAzureDataLakeStorage processor > - > > Key: NIFI-7445 > URL: https://issues.apache.org/jira/browse/NIFI-7445 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Peter Turcsanyi >Assignee: Peter Gyori >Priority: Major > Labels: azure > Fix For: 1.12.0 > > Time Spent: 40m > Remaining Estimate: 0h > > PutAzureDataLakeStorage currently overwrites existing files without error > (azure-storage-file-datalake 12.0.1). > Add Conflict Resolution property with values: fail (default), replace, ignore > (similar to PutFile). > DataLakeDirectoryClient.createFile(String fileName, boolean overwrite) can be > used (available from 12.1.x) -- This message was sent by Atlassian Jira (v8.3.4#803005)
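The requested fail/replace/ignore semantics (mirroring PutFile) can be sketched like this. The sketch uses C++17 `std::filesystem` and hypothetical names; it is not NiFi's implementation:

```cpp
#include <cassert>
#include <filesystem>
#include <fstream>
#include <stdexcept>
#include <string>

enum class ConflictResolution { FAIL, REPLACE, IGNORE };

// Writes `content` to `path`, honoring the conflict resolution strategy
// when the target already exists. Returns true if the file was written,
// false if it was skipped under the IGNORE strategy.
bool putFile(const std::filesystem::path& path, const std::string& content,
             ConflictResolution resolution) {
  if (std::filesystem::exists(path)) {
    switch (resolution) {
      case ConflictResolution::FAIL:
        throw std::runtime_error("target already exists: " + path.string());
      case ConflictResolution::IGNORE:
        return false;  // leave the existing file untouched
      case ConflictResolution::REPLACE:
        break;  // fall through and overwrite
    }
  }
  std::ofstream out(path, std::ios::trunc);
  out << content;
  return true;
}
```

In the Azure case the same decision maps onto `DataLakeDirectoryClient.createFile(fileName, overwrite)`, as the ticket notes.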
[jira] [Updated] (NIFI-7477) Get the details in ValidateRecord as an attribute
[ https://issues.apache.org/jira/browse/NIFI-7477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-7477: --- Affects Version/s: (was: 1.11.4) Status: Patch Available (was: Open) > Get the details in ValidateRecord as an attribute > - > > Key: NIFI-7477 > URL: https://issues.apache.org/jira/browse/NIFI-7477 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Jairo Henao >Assignee: Jairo Henao >Priority: Minor > Labels: features > Time Spent: 10m > Remaining Estimate: 0h > > When validation fails in ValidateRecord, the details are not easy to access. > Details are sent as an event to Provenance. To obtain them, we would have to invoke > the NiFi REST API or export them via the Site-to-Site Reporting Task. > The ValidateRecord processor should optionally allow configuring an > attribute in which to store the details text. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-6672) Expression language plus operation doesn't check for overflow
[ https://issues.apache.org/jira/browse/NIFI-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118669#comment-17118669 ] ASF subversion and git services commented on NIFI-6672: --- Commit b0251178243650e3256a8b0649f0196c6868fcba in nifi's branch refs/heads/MINIFI-422 from Alessandro D'Armiento [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=b025117 ] NIFI-6672 PlusEvaluator throws an Arithmetic Exception in case of Long overflow. TestQuery checks that Long overflow is detected and Double overflow is correctly promoted to POSITIVE_INFINITY The behaviour change is reverted until further investigations. The overflow behaviour is still enforced by unit tests and documented in the expression language doc NIFI-6672 Removed test code. This closes #3738 Signed-off-by: Mike Thomsen > Expression language plus operation doesn't check for overflow > - > > Key: NIFI-6672 > URL: https://issues.apache.org/jira/browse/NIFI-6672 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.9.2 >Reporter: Alessandro D'Armiento >Priority: Major > Fix For: 1.12.0 > > Attachments: image-2019-09-14-17-32-58-740.png > > Time Spent: 0.5h > Remaining Estimate: 0h > > To reproduce the bug, create a FF with an attribute equals to Long.MAX, then > add 100 to that attribute in a following UpdateAttribute processor. The > property will overflow to a negative number without throwing any exception > !image-2019-09-14-17-32-58-740.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
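The fix makes PlusEvaluator throw an ArithmeticException on Long overflow instead of silently wrapping. An equivalent checked addition in C++, assuming the GCC/Clang overflow builtins, looks like this (a sketch, not NiFi code):

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>

// Adds two signed 64-bit values, throwing on overflow instead of wrapping
// to a negative number, analogous to the PlusEvaluator behavior change.
int64_t checkedAdd(int64_t a, int64_t b) {
  int64_t result;
  if (__builtin_add_overflow(a, b, &result)) {
    throw std::overflow_error("signed 64-bit addition overflow");
  }
  return result;
}
```

With this check, the reproduction from the ticket (Long.MAX plus 100) raises an error rather than producing a negative result.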
[jira] [Commented] (NIFI-6785) CompressContent should support deflate/zlib compression
[ https://issues.apache.org/jira/browse/NIFI-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118674#comment-17118674 ] ASF subversion and git services commented on NIFI-6785: --- Commit 101387bfaac1397add03af0a6967cda393970eea in nifi's branch refs/heads/MINIFI-422 from adyoun2 [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=101387b ] NIFI-6785 Support Deflate Compression NIFI-6785 Remove unused imports This closes #3822 Signed-off-by: Mike Thomsen > CompressContent should support deflate/zlib compression > --- > > Key: NIFI-6785 > URL: https://issues.apache.org/jira/browse/NIFI-6785 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Affects Versions: 1.9.2 >Reporter: Adam >Assignee: Adam >Priority: Major > Fix For: 1.12.0 > > Time Spent: 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (NIFI-6673) Expression language multiply operation doesn't check for overflow
[ https://issues.apache.org/jira/browse/NIFI-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118670#comment-17118670 ] ASF subversion and git services commented on NIFI-6673: --- Commit 1ba8f76a44bdf6fb3a6af8c74e678dd7d21ad70b in nifi's branch refs/heads/MINIFI-422 from Alessandro D'Armiento [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=1ba8f76 ] NIFI-6673 MultiplyEvaluator throws an Arithmetic Exception in case of Long overflow. TestQuery checks that Long overflow is detected and Double overflow is correctly promoted to POSITIVE_INFINITY or NEGATIVE_INFINITY The behaviour change is reverted until further investigations. The overflow behaviour is still enforced by unit tests and documented in the expression language doc This closes #3739 Signed-off-by: Mike Thomsen > Expression language multiply operation doesn't check for overflow > - > > Key: NIFI-6673 > URL: https://issues.apache.org/jira/browse/NIFI-6673 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.9.2 >Reporter: Alessandro D'Armiento >Priority: Major > Fix For: 1.12.0 > > Attachments: image-2019-09-14-17-38-19-397.png > > Time Spent: 20m > Remaining Estimate: 0h > > To reproduce the bug, create a FF with an attribute equals to Long.MAX, then > multiply it by 2 to that attribute in a following UpdateAttribute processor. > The property will overflow to a negative number without throwing any exception > !image-2019-09-14-17-38-19-397.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
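The multiplication case is analogous to the addition fix: MultiplyEvaluator throws on Long overflow. A checked multiply in C++, again assuming the GCC/Clang overflow builtins (a sketch, not NiFi code):

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>

// Multiplies two signed 64-bit values, throwing on overflow instead of
// wrapping, analogous to the MultiplyEvaluator behavior change.
int64_t checkedMul(int64_t a, int64_t b) {
  int64_t result;
  if (__builtin_mul_overflow(a, b, &result)) {
    throw std::overflow_error("signed 64-bit multiplication overflow");
  }
  return result;
}
```

So multiplying Long.MAX by 2, as in the ticket's reproduction, raises an error instead of yielding a negative number.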
[jira] [Commented] (NIFI-6255) Allow NiFi to hash specific attributes of a record
[ https://issues.apache.org/jira/browse/NIFI-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118664#comment-17118664 ] ASF subversion and git services commented on NIFI-6255: --- Commit 0f4b79b55ec7e4a85334d4a0d3e7200021950d1a in nifi's branch refs/heads/MINIFI-422 from Phillip Grenier [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=0f4b79b ] NIFI-6255 NIFI-6287: Hash function for expression language and record path. NIFI-6255 NIFI-6287: Rebased to match the new expression language interface NIFI-6255 NIFI-6287: Fix wildcard imports and unused imports NIFI-6255 NIFI-6287: Move to the common codec DigetUtils Update commons-codec This closes #3624 Signed-off-by: Mike Thomsen > Allow NiFi to hash specific attributes of a record > -- > > Key: NIFI-6255 > URL: https://issues.apache.org/jira/browse/NIFI-6255 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Reporter: Nathan Bruce >Assignee: Phillip Grenier >Priority: Trivial > Fix For: 1.12.0 > > Time Spent: 3h 10m > Remaining Estimate: 0h > > Create a processor for NiFi to read in a file of records through the Record > Reader service and hash specific keys within the record. > The processor will accept a comma delimited list of strings that would > specify the keys to be hashed, as an attribute. If the keys specified are not > present in a given record, the processor should continue without resulting in > a failure. > The processor will have a list of hashing algorithms to be applied to the > record, as well as an optional salt. > The hashed value will replace the current value stored for the given key then > passed on to the Record Writer service. -- This message was sent by Atlassian Jira (v8.3.4#803005)
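The described behavior — hash the values of a comma-delimited list of keys, skipping keys absent from the record — can be sketched as follows. `std::hash` stands in for a real digest algorithm such as SHA-256, and all names are hypothetical:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Splits a comma-delimited key list such as "ssn,email" into key names.
std::vector<std::string> splitKeys(const std::string& keys) {
  std::vector<std::string> out;
  std::istringstream stream(keys);
  std::string key;
  while (std::getline(stream, key, ',')) {
    if (!key.empty()) out.push_back(key);
  }
  return out;
}

// Replaces the value of each listed key with a digest of salt + value.
// Keys missing from the record are skipped rather than treated as failures,
// matching the requested processor semantics.
void hashRecordKeys(std::map<std::string, std::string>& record,
                    const std::string& keys, const std::string& salt = "") {
  for (const auto& key : splitKeys(keys)) {
    auto it = record.find(key);
    if (it == record.end()) continue;  // missing key: continue without failing
    it->second = std::to_string(std::hash<std::string>{}(salt + it->second));
  }
}
```

Unlisted keys pass through unchanged to the writer, while each listed key's value is replaced in place.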
[jira] [Commented] (NIFI-7211) TestAttributeRollingWindow unreliable on slow build systems
[ https://issues.apache.org/jira/browse/NIFI-7211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118672#comment-17118672 ]

ASF subversion and git services commented on NIFI-7211:
-------------------------------------------------------

Commit 443f969d3626ac2791f4095de7dc59de7a9733e7 in nifi's branch refs/heads/MINIFI-422 from Mike Thomsen
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=443f969 ]

NIFI-7211 Added @Ignore with warning message to a test that randomly fails due to timing issues.

This closes #4296

> TestAttributeRollingWindow unreliable on slow build systems
> -----------------------------------------------------------
>
>                 Key: NIFI-7211
>                 URL: https://issues.apache.org/jira/browse/NIFI-7211
>             Project: Apache NiFi
>          Issue Type: Test
>            Reporter: Joe Witt
>            Assignee: Mike Thomsen
>            Priority: Major
>             Fix For: 1.12.0
>
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.814 s <<< FAILURE! - in org.apache.nifi.processors.stateful.analysis.TestAttributeRollingWindow
> [ERROR] testMicroBatching(org.apache.nifi.processors.stateful.analysis.TestAttributeRollingWindow)  Time elapsed: 1.037 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[8].0> but was:<[4].0>
>         at org.apache.nifi.processors.stateful.analysis.TestAttributeRollingWindow.testMicroBatching(TestAttributeRollingWindow.java:247)
> [ERROR] Failures:
> [ERROR]   TestAttributeRollingWindow.testMicroBatching:247 expected:<[8].0> but was:<[4].0>
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Commented] (NIFI-6287) Add ability to hash an attribute or value in expression language
[ https://issues.apache.org/jira/browse/NIFI-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118663#comment-17118663 ]

ASF subversion and git services commented on NIFI-6287:
-------------------------------------------------------

Commit 0f4b79b55ec7e4a85334d4a0d3e7200021950d1a in nifi's branch refs/heads/MINIFI-422 from Phillip Grenier
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=0f4b79b ]

NIFI-6255 NIFI-6287: Hash function for expression language and record path.
NIFI-6255 NIFI-6287: Rebased to match the new expression language interface
NIFI-6255 NIFI-6287: Fix wildcard imports and unused imports
NIFI-6255 NIFI-6287: Move to the common codec DigetUtils
Update commons-codec

This closes #3624

Signed-off-by: Mike Thomsen

> Add ability to hash an attribute or value in expression language
> ----------------------------------------------------------------
>
>                 Key: NIFI-6287
>                 URL: https://issues.apache.org/jira/browse/NIFI-6287
>             Project: Apache NiFi
>          Issue Type: Improvement
>            Reporter: Ed Jackson
>            Assignee: Phillip Grenier
>            Priority: Trivial
>              Labels: expression-language
>             Fix For: 1.12.0
>
>
> Similar to NIFI-6255.
>
> In expression language it would be very useful to hash arbitrary data or attributes from the incoming flow file. For example, if the incoming flow file has an attribute called 'serial_num', the user can hash this value in expression language like `${hash('MD5', 'serial_num')}` or similar syntax.
>
> Today users need to add a CryptographicHashAttribute processor to accomplish this.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
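The semantics sketched for the proposed EL function are simple: look up the digest by name and hash the given value. A rough, language-neutral sketch of what `${hash('MD5', ...)}` would compute (Python here; the helper name `el_hash` is hypothetical, not NiFi API):

```python
import hashlib

def el_hash(algorithm, value):
    """Approximate semantics of a hash(...) expression-language function:
    hash the given string value with the named algorithm, return hex."""
    digest_name = algorithm.replace("-", "").lower()  # e.g. "MD5" -> "md5"
    return hashlib.new(digest_name).copy().hexdigest() if value is None else \
        hashlib.new(digest_name, value.encode("utf-8")).hexdigest()

digest = el_hash("MD5", "serial_num")
```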
[jira] [Commented] (NIFI-6674) Expression language minus operation doesn't check for overflow
[ https://issues.apache.org/jira/browse/NIFI-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118671#comment-17118671 ]

ASF subversion and git services commented on NIFI-6674:
-------------------------------------------------------

Commit 788f8b0389f989436c937214b9d01a901bac2f06 in nifi's branch refs/heads/MINIFI-422 from Alessandro D'Armiento
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=788f8b0 ]

NIFI-6674 MinusEvaluator throws an Arithmetic Exception in case of Long overflow.
TestQuery checks that Long overflow is detected and Double overflow is correctly promoted to NEGATIVE_INFINITY
The behaviour change is reverted until further investigations. The overflow behaviour is still enforced by unit tests and documented in the expression language doc
fixed mispositioned # in doc

This closes #3740

Signed-off-by: Mike Thomsen

> Expression language minus operation doesn't check for overflow
> --------------------------------------------------------------
>
>                 Key: NIFI-6674
>                 URL: https://issues.apache.org/jira/browse/NIFI-6674
>             Project: Apache NiFi
>          Issue Type: Bug
>            Reporter: Alessandro D'Armiento
>            Priority: Major
>             Fix For: 1.12.0
>
>         Attachments: image-2019-09-14-17-51-41-809.png
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> To reproduce the bug, create a FlowFile with an attribute equal to Long.MIN_VALUE, then subtract 100 from that attribute in a following UpdateAttribute processor. The property will overflow to a positive number without throwing any exception.
> !image-2019-09-14-17-51-41-809.png!

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
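The overflow in NIFI-6674 is ordinary 64-bit two's-complement wraparound: Long.MIN_VALUE minus a positive number silently wraps to a large positive value. A sketch simulating both the silent behaviour and the checked subtraction the ticket asks for (Python models Java's 64-bit long here; this is not NiFi's code):

```python
LONG_MIN = -2**63
LONG_MAX = 2**63 - 1

def wrapping_sub(a, b):
    """64-bit two's-complement subtraction: the silent-overflow behaviour."""
    r = (a - b) & (2**64 - 1)          # truncate to 64 bits
    return r - 2**64 if r >= 2**63 else r  # reinterpret as signed

def checked_sub(a, b):
    """Subtraction that detects Long overflow, in the spirit of
    Java's Math.subtractExact."""
    r = a - b
    if r < LONG_MIN or r > LONG_MAX:
        raise ArithmeticError("long overflow")
    return r
```

`wrapping_sub(LONG_MIN, 100)` comes out positive, which is exactly the symptom in the attached screenshot; `checked_sub` raises instead.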
[jira] [Updated] (NIFI-6970) Add DistributeRecord processor for distribute data by key hash
[ https://issues.apache.org/jira/browse/NIFI-6970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ilya Kovalev updated NIFI-6970:
-------------------------------
    Attachment: cluster_distribution.png

> Add DistributeRecord processor for distribute data by key hash
> --------------------------------------------------------------
>
>                 Key: NIFI-6970
>                 URL: https://issues.apache.org/jira/browse/NIFI-6970
>             Project: Apache NiFi
>          Issue Type: New Feature
>          Components: Extensions
>    Affects Versions: 1.10.0
>            Reporter: Ilya Kovalev
>            Priority: Minor
>         Attachments: cluster_distribution.png
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Necessary to add a processor that distributes data over user-specified relationships by distribution key(s). Data is distributed across relationships in amounts proportional to the relationship weights. For example, if there are two relationships and the first has a weight of 9 while the second has a weight of 10, the first will be sent 9/19 of the rows and the second 10/19.
> A row is sent to the relationship corresponding to the half-open interval of remainders from 'prev_weights' to 'prev_weights + weight', where 'prev_weights' is the total weight of the lower-numbered relationships and 'weight' is the weight of this relationship. For example, with weights 9 and 10, a row goes to the first relationship for remainders in [0, 9) and to the second for remainders in [9, 19).
>
> It will help for loading data into distributed databases like ClickHouse [https://clickhouse.tech/docs/en/].

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (NIFI-6970) Add DistributeRecord processor for distribute data by key hash
[ https://issues.apache.org/jira/browse/NIFI-6970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ilya Kovalev updated NIFI-6970:
-------------------------------
    Description:
Necessary to add a processor that distributes data over user-specified relationships by distribution key(s). Data is distributed across relationships in amounts proportional to the relationship weights. For example, if there are two relationships and the first has a weight of 9 while the second has a weight of 10, the first will be sent 9/19 of the rows and the second 10/19.

A row is sent to the relationship corresponding to the half-open interval of remainders from 'prev_weights' to 'prev_weights + weight', where 'prev_weights' is the total weight of the lower-numbered relationships and 'weight' is the weight of this relationship. For example, with weights 9 and 10, a row goes to the first relationship for remainders in [0, 9) and to the second for remainders in [9, 19).

It will help for loading data into distributed databases like ClickHouse [https://clickhouse.tech/docs/en/]

    was:
Necessary to add a processor that distributes data over user-specified relationships by distribution key(s). Data is distributed across relationships in amounts proportional to the relationship weights. For example, if there are two relationships and the first has a weight of 9 while the second has a weight of 10, the first will be sent 9/19 of the rows and the second 10/19.

A row is sent to the relationship corresponding to the half-open interval of remainders from 'prev_weights' to 'prev_weights + weight', where 'prev_weights' is the total weight of the lower-numbered relationships and 'weight' is the weight of this relationship. For example, with weights 9 and 10, a row goes to the first relationship for remainders in [0, 9) and to the second for remainders in [9, 19).

> Add DistributeRecord processor for distribute data by key hash
> --------------------------------------------------------------
>
>                 Key: NIFI-6970
>                 URL: https://issues.apache.org/jira/browse/NIFI-6970
>             Project: Apache NiFi
>          Issue Type: New Feature
>          Components: Extensions
>    Affects Versions: 1.10.0
>            Reporter: Ilya Kovalev
>            Priority: Minor
>         Attachments: cluster_distribution.png
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Necessary to add a processor that distributes data over user-specified relationships by distribution key(s). Data is distributed across relationships in amounts proportional to the relationship weights. For example, if there are two relationships and the first has a weight of 9 while the second has a weight of 10, the first will be sent 9/19 of the rows and the second 10/19.
> A row is sent to the relationship corresponding to the half-open interval of remainders from 'prev_weights' to 'prev_weights + weight', where 'prev_weights' is the total weight of the lower-numbered relationships and 'weight' is the weight of this relationship. For example, with weights 9 and 10, a row goes to the first relationship for remainders in [0, 9) and to the second for remainders in [9, 19).
>
> It will help for loading data into distributed databases like ClickHouse [https://clickhouse.tech/docs/en/].

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
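The remainder-interval routing described above is a standard weighted-hash partitioning scheme. A short sketch of the selection step (function name and shape are illustrative, not the proposed processor's API):

```python
def pick_relationship(hash_value, weights):
    """Map a record's key hash onto a relationship index.

    Relationship i owns the half-open remainder interval
    [sum(weights[:i]), sum(weights[:i]) + weights[i]).
    """
    remainder = hash_value % sum(weights)
    upper = 0
    for i, w in enumerate(weights):
        upper += w
        if remainder < upper:
            return i

# With weights 9 and 10: remainders in [0, 9) go to relationship 0,
# remainders in [9, 19) go to relationship 1. In practice hash_value
# would come from hashing the distribution key(s) of the record.
```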
[jira] [Commented] (NIFI-7487) Improve ModifyBytes processor performance (add SupportsBatching annotation)
[ https://issues.apache.org/jira/browse/NIFI-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118540#comment-17118540 ]

r65535 commented on NIFI-7487:
------------------------------

Pull request raised to add batch support.

> Improve ModifyBytes processor performance (add SupportsBatching annotation)
> ---------------------------------------------------------------------------
>
>                 Key: NIFI-7487
>                 URL: https://issues.apache.org/jira/browse/NIFI-7487
>             Project: Apache NiFi
>          Issue Type: Wish
>            Reporter: Marek Kovar
>            Assignee: r65535
>            Priority: Major
>

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Assigned] (NIFI-7487) Improve ModifyBytes processor performance (add SupportsBatching annotation)
[ https://issues.apache.org/jira/browse/NIFI-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

r65535 reassigned NIFI-7487:
----------------------------

    Assignee: r65535

> Improve ModifyBytes processor performance (add SupportsBatching annotation)
> ---------------------------------------------------------------------------
>
>                 Key: NIFI-7487
>                 URL: https://issues.apache.org/jira/browse/NIFI-7487
>             Project: Apache NiFi
>          Issue Type: Wish
>            Reporter: Marek Kovar
>            Assignee: r65535
>            Priority: Major
>

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (MINIFICPP-1242) Fix Windows event log iteration
Arpad Boda created MINIFICPP-1242:
-------------------------------------

             Summary: Fix Windows event log iteration
                 Key: MINIFICPP-1242
                 URL: https://issues.apache.org/jira/browse/MINIFICPP-1242
             Project: Apache NiFi MiNiFi C++
          Issue Type: Bug
    Affects Versions: 0.7.0
            Reporter: Arpad Boda
            Assignee: Arpad Boda
             Fix For: 0.8.0

There are some errors in Windows event log iteration:
- No reasonable timeout specified
- Errors are not handled/logged properly

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (NIFI-7492) Component reference does not change when changing service
Jens M Kofoed created NIFI-7492:
---------------------------------

             Summary: Component reference does not change when changing service
                 Key: NIFI-7492
                 URL: https://issues.apache.org/jira/browse/NIFI-7492
             Project: Apache NiFi
          Issue Type: Bug
          Components: Extensions
    Affects Versions: 1.11.4
            Reporter: Jens M Kofoed

In my flow I have a ConvertRecord processor configured to use an XMLReader and an XMLWriter. I created a copy of the processor, pasted it onto the canvas, and changed the new processor to use a JsonRecordSetWriter instead of the XMLWriter. Now, every time I want to make changes to either the XMLWriter or the JsonRecordSetWriter, both of them report 2 referencing components, and to make a change, both processors have to be stopped.

I've seen something similar when changing a service in a processor to a new service: the old service still reports a reference to the component. The same happens when using parameters.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Assigned] (NIFI-7392) Create a ValidateJSON Processor
[ https://issues.apache.org/jira/browse/NIFI-7392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

r65535 reassigned NIFI-7392:
----------------------------

    Assignee: r65535

> Create a ValidateJSON Processor
> -------------------------------
>
>                 Key: NIFI-7392
>                 URL: https://issues.apache.org/jira/browse/NIFI-7392
>             Project: Apache NiFi
>          Issue Type: Improvement
>            Reporter: r65535
>            Assignee: r65535
>            Priority: Minor
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> I've written a ValidateJson processor which uses JSON schemas to validate FlowFiles (spec here: [http://json-schema.org/]).
> This processor ensures the content isn't modified; FlowFiles are only routed. The ValidateRecord processor, by contrast, rewrites the content.
> The JSON Schema spec provides several nice-to-have features that Avro schemas don't support, like pattern matching on values.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
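The kind of check such a processor performs (including the pattern matching that Avro schemas lack) can be illustrated with a toy validator. This sketch hand-rolls a tiny subset of JSON Schema keywords ('type', 'required', 'pattern') rather than using a real JSON Schema library, and the content stays untouched, mirroring the route-only behaviour described:

```python
import json
import re

def validate(instance, schema):
    """A toy subset of JSON Schema: 'type', 'required', and 'pattern'."""
    t = schema.get("type")
    if t == "object":
        if not isinstance(instance, dict):
            return False
        return all(k in instance for k in schema.get("required", []))
    if t == "string":
        if not isinstance(instance, str):
            return False
        pattern = schema.get("pattern")
        return pattern is None or re.search(pattern, instance) is not None
    return True  # keywords outside this toy subset are not checked

content = '{"serial_num": "AB-123"}'  # the "FlowFile content" is only read
schema = {"type": "object", "required": ["serial_num"]}
is_valid = validate(json.loads(content), schema)
```

A real implementation would route to `valid`/`invalid` relationships based on the result and delegate validation to a full JSON Schema library.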
[GitHub] [nifi-minifi-cpp] hunyadi-dev commented on a change in pull request #795: MINIFICPP-1236 - GetFile processor's \"Input Directory\" property sho…
hunyadi-dev commented on a change in pull request #795:
URL: https://github.com/apache/nifi-minifi-cpp/pull/795#discussion_r431617605

## File path: extensions/standard-processors/processors/GetFile.cpp

```cpp
   if (context->getProperty(FileFilter.getName(), value)) {
     request_.fileFilter = value;
   }
+
+  if (!context->getProperty(Directory.getName(), value)) {
+    throw Exception(PROCESS_SCHEDULE_EXCEPTION, "Input Directory property is missing");
+  }
+  if (!utils::file::FileUtils::is_directory(value.c_str())) {
+    throw Exception(PROCESS_SCHEDULE_EXCEPTION, "Input Directory \"" + value + "\" does not exist");
```

Review comment: Minor, but "not a directory" does not imply "does not exist".

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
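The reviewer's point is that `is_directory(path)` being false covers two distinct situations: the path does not exist at all, or it exists but is a regular file (or other non-directory). A language-neutral sketch of distinguishing them (Python for brevity; the C++ code under review would use the corresponding `FileUtils` checks):

```python
import os

def describe_input_directory(path):
    """Distinguish the two failure modes the error message conflates."""
    if not os.path.exists(path):
        return "does not exist"
    if not os.path.isdir(path):
        return "exists but is not a directory"
    return "ok"
```

Splitting the check lets the processor raise a precise error message for each case instead of reporting "does not exist" for an existing file.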