[GitHub] nifi pull request: NiFi-1481 Enhancement[ nifi.sh env]
Github user PuspenduBanerjee commented on the pull request: https://github.com/apache/nifi/pull/218#issuecomment-195231594 @apiri For now I have no way to test on Windows or Cygwin, as I do not have any MS Windows installation; I am a Linux-only user and, being on vacation in my homeland, cannot even borrow a Windows PC from a colleague. Please help find someone with Windows. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
Re: Establishment of MiNiFi repo and supporting tools
All, I didn't explicitly mention this, but I was aiming for a lazy consensus [1] on continuing with the outlined procedure. If no one objects in the next two days, I will look at carrying out the prescribed items listed.

[1] http://www.apache.org/foundation/voting.html#LazyConsensus

On Wed, Mar 9, 2016 at 3:28 PM, Aldrin Piri wrote:
> NiFi Community,
>
> Originally discussed in January [1], the MiNiFi agent model was met with positive feedback. I would like to propose a concerted effort toward execution on the ideas presented and establish a basis for incorporating the feedback received from, and collaboration with, the community to move toward our goals of helping with dataflow from the point of its origin.
>
> To that end, I would like to propose the creation of:
>
> - a separate repository (nifi-minifi),
> - establishment of a MiNiFi JIRA (MINIFI), and
> - production of associated feature proposals and design documentation within our Confluence Wiki spaces, beyond the initial points outlined by Joe, with some additional proposals for architecture and roadmap.
>
> The separate JIRA and Git repo map to the existing ASF infrastructure for projects with similar efforts and will aid in a cleaner release and issue management process.
>
> Central to the aims of MiNiFi, the tenets of its operation and execution of dataflow are the same as NiFi itself: security, provenance, and management of dataflow; helping bring information to NiFi while maintaining the full extent of its provenance.
>
> Some clarifying points based on the discussion that existed previously:
>
> - While there may be reuse of NiFi components and some overlap, MiNiFi is a separate effort that is complementary to, but not necessarily directly compatible with, existing components and extensions. Obviously there has been a lot of great effort which we can reuse, but in striving for a smaller footprint, we should not find ourselves beholden to the existing core NiFi architecture.
> - There will exist scenarios where there is an inherent need to go smaller and closer to the source system. This will take the form of native code that builds upon the same efforts and items originally developed under the Java ecosystem.
> - Design should take into consideration disparate execution environments and provide ways to robustly handle varying means of communication and exchange. Accordingly, communications, both in management of agents and the transference of data, should be technology-neutral, providing the same flexibility and adaptation that allows NiFi to communicate with a wide breadth of systems, protocols, schemas, and formats.
>
> [1] http://apache-nifi-developer-list.39713.n7.nabble.com/DISCUSS-Proposal-for-an-Apache-NiFi-sub-project-MiNiFi-td6141.html
[GitHub] nifi pull request: NiFi-1481 Enhancement[ nifi.sh env]
Github user apiri commented on the pull request: https://github.com/apache/nifi/pull/218#issuecomment-195209278 @PuspenduBanerjee Going to need to investigate a couple of things further. With JAVA_HOME set, operation worked as anticipated on both OS X and Linux. Windows had issues with the path declaration; I quoted those items, but had issues with establishing NIFI_ROOT. Cygwin also seemed to have
[GitHub] nifi pull request: NiFi-1481 Enhancement[ nifi.sh env]
Github user apiri commented on a diff in the pull request: https://github.com/apache/nifi/pull/218#discussion_r55791948

--- Diff: nifi-bootstrap/src/main/java/org/apache/nifi/bootstrap/RunNiFi.java ---

@@ -546,6 +552,31 @@ public void status() throws IOException {
         }
     }

+    public void env() {
+        final Logger logger = cmdLogger;
+        final Status status = getStatus(logger);
+        if (status.getPid() == null) {
+            logger.info("Apache NiFi is not running");
+            return;
+        }
+        try {
+            final Class virtualMachineClass = Class.forName("com.sun.tools.attach.VirtualMachine");
+            Method attachMethod = virtualMachineClass.getMethod("attach", String.class);
+            Object virtualMachine = attachMethod.invoke(null, status.getPid());
+            Method getSystemPropertiesMethod = virtualMachine.getClass().getMethod("getSystemProperties");
+            final Properties sysProps = (Properties) getSystemPropertiesMethod.invoke(virtualMachine);
+            for (Entry
[GitHub] nifi pull request: NiFi-1481 Enhancement[ nifi.sh env]
Github user apiri commented on a diff in the pull request: https://github.com/apache/nifi/pull/218#discussion_r55791913

--- Diff: nifi-bootstrap/src/main/java/org/apache/nifi/bootstrap/RunNiFi.java ---

@@ -546,6 +552,31 @@ public void status() throws IOException {
         }
     }

+    public void env() {
+        final Logger logger = cmdLogger;
+        final Status status = getStatus(logger);
+        if (status.getPid() == null) {
+            logger.info("Apache NiFi is not running");
+            return;
+        }
+        try {
+            final Class virtualMachineClass = Class.forName("com.sun.tools.attach.VirtualMachine");
+            Method attachMethod = virtualMachineClass.getMethod("attach", String.class);
+            Object virtualMachine = attachMethod.invoke(null, status.getPid());
+            Method getSystemPropertiesMethod = virtualMachine.getClass().getMethod("getSystemProperties");
+            final Properties sysProps = (Properties) getSystemPropertiesMethod.invoke(virtualMachine);
+            for (Entry
[GitHub] nifi pull request: NiFi-1481 Enhancement[ nifi.sh env]
Github user apiri commented on a diff in the pull request: https://github.com/apache/nifi/pull/218#discussion_r55791387

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/bin/env-nifi.bat ---

@@ -0,0 +1,58 @@
+@echo off
+rem
+rem    Licensed to the Apache Software Foundation (ASF) under one or more
+rem    contributor license agreements.  See the NOTICE file distributed with
+rem    this work for additional information regarding copyright ownership.
+rem    The ASF licenses this file to You under the Apache License, Version 2.0
+rem    (the "License"); you may not use this file except in compliance with
+rem    the License.  You may obtain a copy of the License at
+rem
+rem        http://www.apache.org/licenses/LICENSE-2.0
+rem
+rem    Unless required by applicable law or agreed to in writing, software
+rem    distributed under the License is distributed on an "AS IS" BASIS,
+rem    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+rem    See the License for the specific language governing permissions and
+rem    limitations under the License.
+rem
+
+rem Use JAVA_HOME if it's set; otherwise, just use java
+
+if "%JAVA_HOME%" == "" goto noJavaHome
+if not exist "%JAVA_HOME%\bin\java.exe" goto noJavaHome
+set JAVA_EXE=%JAVA_HOME%\bin\java.exe

--- End diff --

Have to be careful with JAVA_HOME throughout as it is highly likely folks using Windows will have this located in a default location with spaces. This caused troubles at points throughout.
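The spaces-in-JAVA_HOME pitfall the reviewer mentions can be illustrated on the POSIX-shell side as well. This is a sketch, not code from the PR; the path below is hypothetical, chosen only to contain a space the way a Windows default location (e.g. under "Program Files") would.

```shell
# Hypothetical JAVA_HOME containing a space, to show why quoting matters.
JAVA_HOME="/tmp/java home demo/jre"
mkdir -p "$JAVA_HOME/bin"
touch "$JAVA_HOME/bin/java"

# Unquoted expansion undergoes word splitting at the space:
set -- $JAVA_HOME/bin/java
echo "unquoted expansion produced $# words"   # 3 words instead of 1

# Quoted expansion keeps the path as a single word:
JAVA_EXE="$JAVA_HOME/bin/java"
[ -f "$JAVA_EXE" ] && echo "found: $JAVA_EXE"
```

The same discipline applies in batch scripts: every use of %JAVA_HOME% should be wrapped in double quotes.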
[GitHub] nifi pull request: NiFi-1481 Enhancement[ nifi.sh env]
Github user apiri commented on a diff in the pull request: https://github.com/apache/nifi/pull/218#discussion_r55790712

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/bin/nifi.sh ---

@@ -118,6 +118,12 @@ locateJava() {
             fi
         fi
     fi
+    [ "x${TOOLS_JAR}" = "x" ] && [ -n "${JAVA_HOME}" ] && TOOLS_JAR=$(find "${JAVA_HOME}" -name "tools.jar")

--- End diff --

This was okay on OS X where my Java home was absolute, but not okay on Linux where this was a symlink. A quick win here would be to use the -H option of find.
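The symlink behavior described above can be reproduced with a small sketch (the paths are hypothetical): without -H, find does not follow a symlinked JAVA_HOME given as an argument, so tools.jar is never found.

```shell
# Hypothetical layout: the real JDK lives in one place and JAVA_HOME is a
# symlink to it, as is common on Linux distributions.
real=/tmp/demo-jdk
mkdir -p "$real/lib"
touch "$real/lib/tools.jar"
rm -f /tmp/demo-java-home
ln -s "$real" /tmp/demo-java-home
JAVA_HOME=/tmp/demo-java-home

# Without -H, find does not follow the symlink given as an argument,
# so nothing beneath it is searched and TOOLS_JAR comes back empty:
TOOLS_JAR=$(find "$JAVA_HOME" -name "tools.jar")

# With -H, find follows symlinks named on the command line and descends:
TOOLS_JAR_H=$(find -H "$JAVA_HOME" -name "tools.jar")
echo "without -H: '$TOOLS_JAR'"
echo "with -H:    '$TOOLS_JAR_H'"
```

-H (unlike -L) follows only the symlinks given as arguments, not symlinks encountered during traversal, which is exactly the JAVA_HOME case.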
[GitHub] nifi pull request: NiFi-1481 Enhancement[ nifi.sh env]
Github user apiri commented on a diff in the pull request: https://github.com/apache/nifi/pull/218#discussion_r55790433

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/bin/nifi.sh ---

@@ -118,6 +118,12 @@ locateJava() {
             fi
         fi
     fi
+    [ "x${TOOLS_JAR}" = "x" ] && [ -n "${JAVA_HOME}" ] && TOOLS_JAR=$(find "${JAVA_HOME}" -name "tools.jar")
+    [ "x${TOOLS_JAR}" = "x" ] && TOOLS_JAR=$(find "${JAVA_HOME}" -name "classes.jar")
+    if ["x${TOOLS_JAR}" = "x" ]; then

--- End diff --

I think my preference would be to opt to display this only when we are using the env command.
Reg: starting and Stopping processor
Hi Team, Is there a way to start and stop a processor using a REST API or the command line? If yes, please provide the steps. Regards, Sourav Gulati
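For readers of the archive, the question above can be sketched against the REST API. Note the hedges: the /run-status endpoint shown here comes from later NiFi releases (the 0.x API of this era differed), and the base URL and processor UUID are placeholders, so verify the paths against the REST API documentation for your version. The sketch only constructs and prints the commands rather than executing them against a live instance.

```shell
# Assumed NiFi base URL and a placeholder processor UUID.
NIFI="http://localhost:8080/nifi-api"
PROCESSOR_ID="abc-123"

# Starting a processor: PUT its run-status with state RUNNING, supplying
# the component's current revision (version 0 here for illustration).
start_cmd="curl -s -X PUT -H 'Content-Type: application/json' -d '{\"revision\":{\"version\":0},\"state\":\"RUNNING\"}' $NIFI/processors/$PROCESSOR_ID/run-status"

# Stopping is the same call with state STOPPED.
stop_cmd="curl -s -X PUT -H 'Content-Type: application/json' -d '{\"revision\":{\"version\":0},\"state\":\"STOPPED\"}' $NIFI/processors/$PROCESSOR_ID/run-status"

echo "$start_cmd"
echo "$stop_cmd"
```

In practice you would first GET the processor entity to obtain its current revision, since NiFi rejects updates with a stale revision.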
[GitHub] nifi pull request: NiFi-1481 Enhancement[ nifi.sh env]
Github user apiri commented on a diff in the pull request: https://github.com/apache/nifi/pull/218#discussion_r55789451

--- Diff: nifi-bootstrap/src/main/java/org/apache/nifi/bootstrap/RunNiFi.java ---

@@ -546,6 +552,31 @@ public void status() throws IOException {
         }
     }

+    public void env() {
+        final Logger logger = cmdLogger;
+        final Status status = getStatus(logger);
+        if (status.getPid() == null) {
+            logger.info("Apache NiFi is not running");
+            return;
+        }
+        try {
+            final Class virtualMachineClass = Class.forName("com.sun.tools.attach.VirtualMachine");

--- End diff --

Formatting: the try block should be indented.
[GitHub] nifi pull request: NiFi-1481 Enhancement[ nifi.sh env]
Github user apiri commented on a diff in the pull request: https://github.com/apache/nifi/pull/218#discussion_r55789241

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/bin/nifi.sh ---

@@ -118,6 +118,12 @@ locateJava() {
             fi
         fi
     fi
+    [ "x${TOOLS_JAR}" = "x" ] && [ -n "${JAVA_HOME}" ] && TOOLS_JAR=$(find "${JAVA_HOME}" -name "tools.jar")
+    [ "x${TOOLS_JAR}" = "x" ] && TOOLS_JAR=$(find "${JAVA_HOME}" -name "classes.jar")
+    if ["x${TOOLS_JAR}" = "x" ]; then

--- End diff --

You missed a space after the [ in your if statement.
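For reference, a corrected form of the quoted line. Since `[` is an ordinary command name in the shell, it needs whitespace around it to be parsed as the test command at all; `["x${TOOLS_JAR}"` would typically fail with "command not found".

```shell
# Corrected form of the line from the diff, with the space restored
# after `[`. TOOLS_JAR is left empty here to exercise the branch.
TOOLS_JAR=""
msg=""
if [ "x${TOOLS_JAR}" = "x" ]; then
    msg="TOOLS_JAR is empty"
fi
echo "$msg"
```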
[GitHub] nifi pull request: Nifi 1495 - AWS Kinesis Firehose
Github user apiri commented on the pull request: https://github.com/apache/nifi/pull/213#issuecomment-195173926 @mans2singh Yep. I think we can probably replace the number-of-items batch size (or make it secondary to the buffer size). The idea would be to continuously grab FlowFiles until we have reached this threshold; when the next item arrives that would put us over the limit, we transfer it back to the incoming queue with session.transfer(flowFile) (note the lack of a relationship). We can then create the batch much the same way you have done now. Does that make sense? Thanks for your work on this (and all the other great AWS stuff, very popular extensions), and apologies for the lag in following up on this issue.
[GitHub] nifi pull request: Nifi 1495 - AWS Kinesis Firehose
Github user mans2singh commented on the pull request: https://github.com/apache/nifi/pull/213#issuecomment-195172976 @apiri Your recommendation to use buffer size is great. So, should we add another property, total batch bytes, and then send the records in batches of that size? If you have any other suggestions or pointers on how this can be implemented, please let me know and I will work on it. Thanks again for your time.
[GitHub] nifi pull request: NIFI-1491 Throws exception when unable to delet...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/227
Re: How to get data from iPhone
I think one option would be to use a cloud sharing service (Box, Dropbox, Google Drive, AWS, etc.) and then use an InvokeHTTP processor to retrieve resources from those endpoints. Are the phones physically connected to a computer? There are obviously other avenues to explore, but the question is fairly vague. Andy LoPresto alopresto.apa...@gmail.com PGP Fingerprint: 70EC B3E5 98A6 5A3F D3C4 BACE 3C6E F65B 2F7D EF69 > On Mar 10, 2016, at 11:09 AM, ambaricloud wrote: > > Hi, > I need to transfer data (images) from iPhones to HDFS using NiFi. > > Any suggestions? > Satya > ambariCloud > > -- > View this message in context: http://apache-nifi-developer-list.39713.n7.nabble.com/How-to-get-data-from-iPhone-tp7899.html > Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
[GitHub] nifi pull request: NIFI-1518 InferAvroSchema note has an option to...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/235
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user alopresto commented on a diff in the pull request: https://github.com/apache/nifi/pull/267#discussion_r55774945

--- Diff: nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/FileIdentityProvider.java ---

@@ -0,0 +1,216 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.authentication.file;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import javax.xml.XMLConstants;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBElement;
+import javax.xml.bind.JAXBException;
+import javax.xml.bind.Unmarshaller;
+import javax.xml.bind.ValidationEvent;
+import javax.xml.bind.ValidationEventHandler;
+import javax.xml.transform.stream.StreamSource;
+import javax.xml.validation.Schema;
+import javax.xml.validation.SchemaFactory;
+
+import org.apache.nifi.authentication.AuthenticationResponse;
+import org.apache.nifi.authentication.LoginCredentials;
+import org.apache.nifi.authentication.LoginIdentityProvider;
+import org.apache.nifi.authentication.LoginIdentityProviderConfigurationContext;
+import org.apache.nifi.authentication.LoginIdentityProviderInitializationContext;
+import org.apache.nifi.authentication.exception.IdentityAccessException;
+import org.apache.nifi.authentication.exception.InvalidLoginCredentialsException;
+import org.apache.nifi.authorization.exception.ProviderCreationException;
+import org.apache.nifi.authorization.exception.ProviderDestructionException;
+import org.apache.nifi.authentication.file.generated.UserCredentials;
+import org.apache.nifi.authentication.file.generated.UserCredentialsList;
+import org.apache.nifi.util.FormatUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
+import org.springframework.security.crypto.password.PasswordEncoder;
+
+/**
+ * Identity provider for simple username/password authentication backed by a local credentials file. The credentials
+ * file contains usernames and password hashes in bcrypt format. Any compatible bcrypt "2a" implementation may be used
+ * to populate the credentials file.
+ *
+ * The XML format of the credentials file is as follows:
+ *
+ * {@code
+ *
+ *
+ *
+ *
+ *
+ * }
+ */
+public class FileIdentityProvider implements LoginIdentityProvider {
+
+    static final String PROPERTY_CREDENTIALS_FILE = "Credentials File";
+    static final String PROPERTY_EXPIRATION_PERIOD = "Authentication Expiration";
+
+    private static final Logger logger = LoggerFactory.getLogger(FileIdentityProvider.class);
+    private static final String CREDENTIALS_XSD = "/credentials.xsd";
+    private static final String JAXB_GENERATED_PATH = "org.apache.nifi.authentication.file.generated";
+    private static final JAXBContext JAXB_CONTEXT = initializeJaxbContext();
+
+    private String issuer;
+    private long expirationPeriodMilliseconds;
+    private String credentialsFilePath;
+    private PasswordEncoder passwordEncoder = new BCryptPasswordEncoder();
+    private String identifier;
+
+    private static JAXBContext initializeJaxbContext() {
+        try {
+            return JAXBContext.newInstance(JAXB_GENERATED_PATH, FileIdentityProvider.class.getClassLoader());
+        } catch (JAXBException e) {
+            throw new RuntimeException("Failed creating JAXBContext for " + FileIdentityProvider.class.getCanonicalName());
+        }
+    }
+
+    private static ValidationEventHandler defaultValidationEventHandler = new ValidationEventHandler() {
+        @Override
+        public boolean handleEvent(ValidationEvent event) {
+            return false;
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user alopresto commented on a diff in the pull request: https://github.com/apache/nifi/pull/267#discussion_r55774678

--- Diff: nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/FileIdentityProvider.java ---

[same FileIdentityProvider diff excerpt as quoted above]
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user alopresto commented on a diff in the pull request: https://github.com/apache/nifi/pull/267#discussion_r55773583

--- Diff: nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/FileIdentityProvider.java ---

[same FileIdentityProvider diff excerpt as quoted above]
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user alopresto commented on a diff in the pull request: https://github.com/apache/nifi/pull/267#discussion_r55773241 --- Diff: nifi-nar-bundles/nifi-iaa-providers-bundle/nifi-file-identity-provider/src/main/java/org/apache/nifi/authentication/file/FileIdentityProvider.java --- @@ -0,0 +1,216 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.authentication.file; + +import java.io.File; +import java.io.FileNotFoundException; +import java.util.List; +import java.util.Map; +import java.util.concurrent.TimeUnit; +import javax.xml.XMLConstants; +import javax.xml.bind.JAXBContext; +import javax.xml.bind.JAXBElement; +import javax.xml.bind.JAXBException; +import javax.xml.bind.Unmarshaller; +import javax.xml.bind.ValidationEvent; +import javax.xml.bind.ValidationEventHandler; +import javax.xml.transform.stream.StreamSource; +import javax.xml.validation.Schema; +import javax.xml.validation.SchemaFactory; + +import org.apache.nifi.authentication.AuthenticationResponse; +import org.apache.nifi.authentication.LoginCredentials; +import org.apache.nifi.authentication.LoginIdentityProvider; +import org.apache.nifi.authentication.LoginIdentityProviderConfigurationContext; +import org.apache.nifi.authentication.LoginIdentityProviderInitializationContext; +import org.apache.nifi.authentication.exception.IdentityAccessException; +import org.apache.nifi.authentication.exception.InvalidLoginCredentialsException; +import org.apache.nifi.authorization.exception.ProviderCreationException; +import org.apache.nifi.authorization.exception.ProviderDestructionException; +import org.apache.nifi.authentication.file.generated.UserCredentials; +import org.apache.nifi.authentication.file.generated.UserCredentialsList; +import org.apache.nifi.util.FormatUtils; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder; +import org.springframework.security.crypto.password.PasswordEncoder; + + +/** + * Identity provider for simple username/password authentication backed by a local credentials file. The credentials + * file contains usernames and password hashes in bcrypt format. Any compatible bcrypt "2a" implementation may be used + * to populate the credentials file. 
+ * + * The XML format of the credentials file is as follows: + * + * {@code + * + * + * + * + * + * + * } + * + */ +public class FileIdentityProvider implements LoginIdentityProvider { + +static final String PROPERTY_CREDENTIALS_FILE = "Credentials File"; +static final String PROPERTY_EXPIRATION_PERIOD = "Authentication Expiration"; + +private static final Logger logger = LoggerFactory.getLogger(FileIdentityProvider.class); +private static final String CREDENTIALS_XSD = "/credentials.xsd"; +private static final String JAXB_GENERATED_PATH = "org.apache.nifi.authentication.file.generated"; +private static final JAXBContext JAXB_CONTEXT = initializeJaxbContext(); + +private String issuer; +private long expirationPeriodMilliseconds; +private String credentialsFilePath; +private PasswordEncoder passwordEncoder = new BCryptPasswordEncoder(); +private String identifier; + +private static JAXBContext initializeJaxbContext() { +try { +return JAXBContext.newInstance(JAXB_GENERATED_PATH, FileIdentityProvider.class.getClassLoader()); +} catch (JAXBException e) { +throw new RuntimeException("Failed creating JAXBContext for " + FileIdentityProvider.class.getCanonicalName()); +} +} + +private static ValidationEventHandler defaultValidationEventHandler = new ValidationEventHandler() { +@Override +public boolean handleEvent(ValidationEvent event) { +return false;
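The Javadoc's {@code} sample above did not survive archiving. A credentials file of the general shape it describes might look like the following; the element and attribute names here are assumptions for illustration, not the provider's actual schema:

```xml
<credentials>
    <!-- passwordHash is a bcrypt "2a" hash; any compatible bcrypt
         implementation may be used to populate it -->
    <user name="alice" passwordHash="$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy"/>
</credentials>
```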
[GitHub] nifi pull request: NIFI-1614 File Identity Provider implementation
Github user alopresto commented on a diff in the pull request: https://github.com/apache/nifi/pull/267#discussion_r55772695 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/login-identity-providers.xml --- @@ -89,4 +89,28 @@ 12 hours To enable the ldap-provider remove 2 lines. This is 2 of 2. --> + + + --- End diff -- Is this actually line 2 of 2? If not, where is the other line to be removed? --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] nifi pull request: NIFI-1518 InferAvroSchema now has an option to...
Github user trkurc commented on the pull request: https://github.com/apache/nifi/pull/235#issuecomment-195092832 @JPercivall - code looks good, just need to contrib-check, do some quick manual testing and merge
[GitHub] nifi pull request: NIFI-627 removed flowfile penalization which co...
GitHub user mosermw opened a pull request: https://github.com/apache/nifi/pull/268 NIFI-627 removed flowfile penalization which could skew behavior when processor's Time Duration was less than Penalty Duration, improved over throttle penalization NIFI-990 corrected failure path NIFI-1329 refactored using FlowFileFilter to avoid repeatedly returning flowfiles to the input queue, producing misleading stats and excessive Tasks/Time used You can merge this pull request into a Git repository by running: $ git pull https://github.com/mosermw/nifi NIFI-627 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/268.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #268 commit 6296b41121dee617b7eeb7ec7f9accd05042d4e6 Author: Mike Moser Date: 2016-03-10T22:53:25Z NIFI-627 removed flowfile penalization which could skew behavior when processor's Time Duration was less than Penalty Duration, improved over throttle penalization NIFI-990 corrected failure path NIFI-1329 refactored using FlowFileFilter to avoid repeatedly returning flowfiles to the input queue, producing misleading stats and excessive Tasks/Time used
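The NIFI-1329 change above pulls only the flowfiles a batch can use in a single call, instead of taking flowfiles and returning the unusable ones to the queue on every trigger. A stdlib-only sketch of that idea follows; the queue and predicate here merely stand in for NiFi's `ProcessSession.get(FlowFileFilter)` (this example does not use the NiFi API), so all names are illustrative:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Predicate;

public class FilterPollSketch {

    /**
     * Polls up to maxItems matching items from the queue in one pass.
     * Non-matching items are re-queued at the tail exactly once per call,
     * rather than being taken and returned repeatedly across many calls
     * (which is what skews stats and Tasks/Time).
     */
    public static List<Integer> pollMatching(Deque<Integer> queue,
                                             Predicate<Integer> filter,
                                             int maxItems) {
        List<Integer> accepted = new ArrayList<>();
        int toExamine = queue.size(); // examine each queued item at most once
        for (int i = 0; i < toExamine && accepted.size() < maxItems; i++) {
            Integer item = queue.pollFirst();
            if (item == null) {
                break;
            }
            if (filter.test(item)) {
                accepted.add(item);   // consumed by this batch
            } else {
                queue.addLast(item);  // stays queued for a later batch
            }
        }
        return accepted;
    }
}
```

With a filter of `x -> x < 10` over a queue of `[5, 50, 7, 60, 3]`, one call consumes the three small items and leaves the two large ones queued.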
[GitHub] nifi pull request: NIFI-1575: Add QueryDatabaseTable processor
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/261#discussion_r55762812 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java --- @@ -0,0 +1,607 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.standard; + +import org.apache.commons.lang3.StringUtils; +import org.apache.nifi.annotation.behavior.EventDriven; +import org.apache.nifi.annotation.behavior.InputRequirement; +import org.apache.nifi.annotation.behavior.InputRequirement.Requirement; +import org.apache.nifi.annotation.behavior.Stateful; +import org.apache.nifi.annotation.behavior.WritesAttribute; +import org.apache.nifi.annotation.documentation.CapabilityDescription; +import org.apache.nifi.annotation.documentation.Tags; +import org.apache.nifi.annotation.lifecycle.OnScheduled; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.components.state.Scope; +import org.apache.nifi.components.state.StateManager; +import org.apache.nifi.components.state.StateMap; +import org.apache.nifi.dbcp.DBCPService; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.logging.ProcessorLog; +import org.apache.nifi.processor.AbstractProcessor; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processor.exception.ProcessException; +import org.apache.nifi.processor.io.OutputStreamCallback; +import org.apache.nifi.processor.util.StandardValidators; +import org.apache.nifi.processors.standard.util.JdbcCommon; +import org.apache.nifi.util.LongHolder; +import org.apache.nifi.util.StopWatch; + +import java.io.IOException; +import java.io.OutputStream; +import java.math.BigDecimal; +import java.sql.Connection; +import java.sql.Date; +import java.sql.ResultSet; +import java.sql.ResultSetMetaData; +import java.sql.SQLException; +import java.sql.Statement; +import java.sql.Timestamp; +import java.text.DecimalFormat; +import java.text.ParseException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; 
+import java.util.concurrent.TimeUnit; + +import static java.sql.Types.ARRAY; +import static java.sql.Types.BIGINT; +import static java.sql.Types.BINARY; +import static java.sql.Types.BIT; +import static java.sql.Types.BLOB; +import static java.sql.Types.BOOLEAN; +import static java.sql.Types.CHAR; +import static java.sql.Types.CLOB; +import static java.sql.Types.DATE; +import static java.sql.Types.DECIMAL; +import static java.sql.Types.DOUBLE; +import static java.sql.Types.FLOAT; +import static java.sql.Types.INTEGER; +import static java.sql.Types.LONGNVARCHAR; +import static java.sql.Types.LONGVARBINARY; +import static java.sql.Types.LONGVARCHAR; +import static java.sql.Types.NCHAR; +import static java.sql.Types.NUMERIC; +import static java.sql.Types.NVARCHAR; +import static java.sql.Types.REAL; +import static java.sql.Types.ROWID; +import static java.sql.Types.SMALLINT; +import static java.sql.Types.TIME; +import static java.sql.Types.TIMESTAMP; +import static java.sql.Types.TINYINT; +import static java.sql.Types.VARBINARY; +import static java.sql.Types.VARCHAR; + +@EventDriven +@InputRequirement(Requirement.INPUT_ALLOWED) +@Tags({"sql", "select", "jdbc", "query", "database"}) +@CapabilityDescription("Execute provided SQL select query. Query result will be converted to Avro format." ++ " Streaming is used so arbitrarily large result sets are supported. This processor can be scheduled to run on " ++ "a timer, or cron expression,
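The processor quoted above (NIFI-1575) runs a generated SELECT on a schedule; an incremental-fetch variant of this pattern typically remembers the maximum value seen in a column (in NiFi, via the StateManager imported above) and constrains the next query to rows past it. A stdlib-only sketch of that query construction; the class, method, and parameter names are illustrative, not the processor's actual API:

```java
public class IncrementalQuerySketch {

    /**
     * Builds a SELECT for the table. On the first run (no stored state)
     * the whole table is fetched; afterwards only rows whose
     * max-value column exceeds the last observed value are selected.
     */
    public static String buildQuery(String table,
                                    String maxValueColumn,
                                    String lastMaxValue) {
        StringBuilder sql = new StringBuilder("SELECT * FROM ").append(table);
        if (lastMaxValue != null) {
            // lastMaxValue would come from persisted state after the prior run
            sql.append(" WHERE ").append(maxValueColumn)
               .append(" > ").append(lastMaxValue);
        }
        return sql.toString();
    }
}
```

A real implementation would additionally quote identifiers and bind the value as a typed parameter rather than concatenating it into the SQL text.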
[GitHub] nifi pull request: NIFI-1575: Add QueryDatabaseTable processor
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/261#discussion_r55762688 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java --- (same diff excerpt as quoted above)
[GitHub] nifi pull request: NIFI-1575: Add QueryDatabaseTable processor
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/261#discussion_r55762510 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java --- (same diff excerpt as quoted above)
[GitHub] nifi pull request: NIFI-1575: Add QueryDatabaseTable processor
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/261#discussion_r55762021 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java --- (same diff excerpt as quoted above)
[GitHub] nifi pull request: NIFI-1575: Add QueryDatabaseTable processor
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/261#discussion_r55760879 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java --- (same diff excerpt as quoted above)
[GitHub] nifi pull request: NIFI-1575: Add QueryDatabaseTable processor
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/261#discussion_r55759929 --- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java --- (same diff excerpt as quoted above)
[GitHub] nifi pull request: NIFI-1599 Changing DatagramChannelDispatcher, s...
Github user bbende closed the pull request at: https://github.com/apache/nifi/pull/262 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] nifi pull request: NIFI-1575: Add QueryDatabaseTable processor
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/261#discussion_r55753755

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/QueryDatabaseTable.java ---
@@ -0,0 +1,607 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.standard;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.behavior.InputRequirement.Requirement;
+import org.apache.nifi.annotation.behavior.Stateful;
+import org.apache.nifi.annotation.behavior.WritesAttribute;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateManager;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.dbcp.DBCPService;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ProcessorLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.io.OutputStreamCallback;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.standard.util.JdbcCommon;
+import org.apache.nifi.util.LongHolder;
+import org.apache.nifi.util.StopWatch;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.math.BigDecimal;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.sql.Timestamp;
+import java.text.DecimalFormat;
+import java.text.ParseException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+import static java.sql.Types.ARRAY;
+import static java.sql.Types.BIGINT;
+import static java.sql.Types.BINARY;
+import static java.sql.Types.BIT;
+import static java.sql.Types.BLOB;
+import static java.sql.Types.BOOLEAN;
+import static java.sql.Types.CHAR;
+import static java.sql.Types.CLOB;
+import static java.sql.Types.DATE;
+import static java.sql.Types.DECIMAL;
+import static java.sql.Types.DOUBLE;
+import static java.sql.Types.FLOAT;
+import static java.sql.Types.INTEGER;
+import static java.sql.Types.LONGNVARCHAR;
+import static java.sql.Types.LONGVARBINARY;
+import static java.sql.Types.LONGVARCHAR;
+import static java.sql.Types.NCHAR;
+import static java.sql.Types.NUMERIC;
+import static java.sql.Types.NVARCHAR;
+import static java.sql.Types.REAL;
+import static java.sql.Types.ROWID;
+import static java.sql.Types.SMALLINT;
+import static java.sql.Types.TIME;
+import static java.sql.Types.TIMESTAMP;
+import static java.sql.Types.TINYINT;
+import static java.sql.Types.VARBINARY;
+import static java.sql.Types.VARCHAR;
+
+@EventDriven
+@InputRequirement(Requirement.INPUT_ALLOWED)
+@Tags({"sql", "select", "jdbc", "query", "database"})
+@CapabilityDescription("Execute provided SQL select query. Query result will be converted to Avro format."
++ " Streaming is used so arbitrarily large result sets are supported. This processor can be scheduled to run on "
++ "a timer, or cron expression,
[GitHub] nifi pull request: NiFi-1481 Enhancement[ nifi.sh env]
Github user apiri commented on the pull request: https://github.com/apache/nifi/pull/218#issuecomment-195034105 reviewing
[GitHub] nifi pull request: A relationship can be auto-terminable. In this ...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/217
[GitHub] nifi pull request: NIFI-1518 InferAvroSchema note has an option to...
Github user JPercivall commented on the pull request: https://github.com/apache/nifi/pull/235#issuecomment-195030029 @trkurc what was the result of your review?
[GitHub] nifi pull request: NIFI-1491 Throws exception when unable to delet...
Github user JPercivall commented on the pull request: https://github.com/apache/nifi/pull/227#issuecomment-195028936 @trkurc what was the result of your review?
How to get data from iPhone
Hi, I need to transfer data (images) from iPhones to HDFS using NiFi. Any suggestions? Satya, ambariCloud -- View this message in context: http://apache-nifi-developer-list.39713.n7.nabble.com/How-to-get-data-from-iPhone-tp7899.html Sent from the Apache NiFi Developer List mailing list archive at Nabble.com.
[GitHub] nifi pull request: NIFI-614 Added initial support for new style JM...
Github user JPercivall commented on the pull request: https://github.com/apache/nifi/pull/222#issuecomment-195006367 Following the pattern for Controller Services and their API in nifi-standard-services, JMSConnectionFactoryProviderDefinition should be in its own package.
Re: Closing in on the Apache NiFi 0.6.0 release
Have updated the migration guide [1] and release notes [2] for 0.6.0. If there are additional features/inclusions, will adjust as needed. I intend to follow a very similar RM model to what Tony did for the 0.5.x line and which follows the apparent consensus from our recent Git discussions. We'll definitely need folks to really focus in on the existing JIRAs slated for 0.6.0 and any necessary reviews or tweaks. Early next week we should start moving tickets that aren't key bug fixes or tied to previously stated 0.6.0 goals to the next minor (0.7.0) build. [1] https://cwiki.apache.org/confluence/display/NIFI/Migration+Guidance [2] https://cwiki.apache.org/confluence/display/NIFI/Release+Notes Thanks Joe On Wed, Mar 9, 2016 at 6:08 PM, Tony Kurc wrote: > Joe, > I tagged this one that I've been closing in on, and was just finishing up. > > https://issues.apache.org/jira/browse/NIFI-1481 > > On Wed, Mar 9, 2016 at 5:42 PM, Joe Witt wrote: >> Team, >> >> It is time to start pulling in for the Apache NiFi 0.6.0 release to >> keep with our previously suggested cadence. There are already a lot >> of really nice improvements/bug fixes on there and some nice new >> features. We do have about 23 outstanding JIRAs assigned that are >> open. >> >> Most appear to be in great shape with active discussion centered >> around PRs with review feedback. Some just appear to need the final >> push over the merge line. There are also some PRs on Github that may >> well be in a ready state. >> >> If there are things which are really important to folks that they do >> not presently see on the 0.6.0 list but that they really want/need and >> think are ready or close to ready please advise. >> >> I am happy to RM the release but if someone else is interested please >> advise. Let's try to shoot for Mar 16th vote start so we can be close >> to the Mar 18th goal talked about a while ago on the 6-12 month >> roadmap proposal. >> >> Thanks >> Joe >>
[GitHub] nifi pull request: NIFI-614 Added initial support for new style JM...
Github user JPercivall commented on the pull request: https://github.com/apache/nifi/pull/222#issuecomment-194997331 Reviewing
Re: Mutiple dataflow jobs management(lots of jobs)
Hi Yan, We can get more into details and particulars if needed, but have you experimented with expression language [1]? I could see a cron-driven approach covering your periodic efforts that feeds some number of ExecuteSQL processors (perhaps one for each database you are communicating with), each parameterized with a table name. This would certainly cut down on the need for 30K processors mapped one-to-one to tables. In terms of monitoring the dataflows, could you describe what else you are searching for beyond the graph view? NiFi tries to provide context for the flow of data but is not trying to be a sole monitoring solution; we can give information on a per-processor basis, but do not delve into specifics. There is a summary view for the overall flow where you can monitor stats about the components and connections in the system. We support interoperation with monitoring systems via push (ReportingTask [2]) and pull (REST API [3]) semantics. Any other details beyond your list of how this all interoperates might shed some more light on what you are trying to accomplish. It seems like NiFi should be able to help with this. With some additional information we may be able to provide further guidance or at least get some insights on use cases we could look to improve upon and extend NiFi to support. Thanks! [1] http://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html [2] http://nifi.apache.org/docs/nifi-docs/html/developer-guide.html#reporting-tasks [3] http://nifi.apache.org/docs/nifi-docs/rest-api/index.html On Sat, Mar 5, 2016 at 9:25 PM, 刘岩 wrote: > Hi All > > I'm trying to adopt NiFi in production but cannot find an admin > console for monitoring the dataflows. > > The scenario is simple: > > 1. we gather data from Oracle databases to HDFS and then to Hive. > 2. residuals/incrementals are updated daily or monthly via NiFi. > 3. full dumps of some tables are executed daily or monthly via NiFi.
> It is really simple; however, we have 7 Oracle databases with over > 30K tables that need to implement the above scenario. > > This means that I will drag that ExecuteSQL element 30K times or > so and also need to place them in a nice-looking way on my little 21-inch > screen. > > Just wondering if there is a table-list-like, groupable and searchable > task control and monitoring feature for NiFi. > > Thank you very much in advance > > Yan Liu > > Hortonworks Service Division > > Richinfo, Shenzhen, China (PR) > > 06/03/2016
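[Editor's note: the parameterized-query idea above can be sketched in plain Java. This is an illustrative model, not NiFi code — it imitates what NiFi's expression language would do when substituting a FlowFile attribute such as a hypothetical `table.name` into a single query template, which is how one parameterized ExecuteSQL processor can stand in for thousands of hard-coded ones.]

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch (not NiFi's expression-language engine): one query
// template plus a list of table names replaces one processor per table.
class QueryTemplate {
    // Substitute a single table name into the template, the way NiFi's
    // expression language would resolve ${table.name} from an attribute.
    static String render(String template, String tableName) {
        return template.replace("${table.name}", tableName);
    }

    // Expand the template once per table -- the whole 30K-table fan-out
    // collapses into data (a table list) instead of 30K processors.
    static List<String> renderAll(String template, List<String> tables) {
        return tables.stream()
                .map(t -> render(template, t))
                .collect(Collectors.toList());
    }
}
```

Used with a table list pulled from each database's catalog, this keeps the canvas to a handful of processors regardless of table count.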
[GitHub] nifi pull request: NIFI-899 Rewrite of ListenUDP to use new listen...
GitHub user bbende opened a pull request: https://github.com/apache/nifi/pull/266 NIFI-899 Rewrite of ListenUDP to use new listener framework, includes… …the following changes: - Adding Network Interface property to AbstractListenEventProcessor and ListenSyslog - Adding sending host and sending port to DatagramChannelDispatcher - Creation of common base class AbstractListenEventBatchingProcessor - Refactor of ListenUDP, ListenTCP, and ListenRELP to all extend from AbstractListenEventBatchingProcessor You can merge this pull request into a Git repository by running: $ git pull https://github.com/bbende/nifi NIFI-899 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/266.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #266 commit 84d09498c05a5325978bf80d61e065f2ae3b15b7 Author: Bryan Bende Date: 2016-03-10T14:19:54Z NIFI-899 Rewrite of ListenUDP to use new listener framework, includes the following changes: - Adding Network Interface property to AbstractListenEventProcessor and ListenSyslog - Adding sending host and sending port to DatagramChannelDispatcher - Creation of common base class AbstractListenEventBatchingProcessor - Refactor of ListenUDP, ListenTCP, and ListenRELP to all extend from AbstractListenEventBatchingProcessor
[GitHub] nifi pull request: Nifi 1495 - AWS Kinesis Firehose
Github user apiri commented on the pull request: https://github.com/apache/nifi/pull/213#issuecomment-194975737 @mans2singh I had a chance to sit down and revisit this. Overall, it looks good and I was able to test a flow successfully putting to a Kinesis Firehose which aggregated and dumped to S3. One thing that was mentioned in my initial review that we still need to cover is how we are handling batching. I do think we need to handle that in a more constrained fashion given that file sizes could vary widely. With how the processor is currently configured, it could hold up to 250MB in memory, by default. Instead, what would your thoughts be on converting this to a buffer size property? If people want batching, they can specify a given memory size (perhaps something like 1 MB by default) and then we can wait until that threshold is hit or no more input flowfiles are available, at which point they are sent off in a batch. If batching is not desired, they can either empty the buffer property or specify 0 bytes. Thoughts on this approach? Ultimately, we are trying to prevent people from incidentally causing issues with heap exhaustion. With the prescribed approach here, people can get as aggressive as they wish with batching and have a finitely constrained amount of space per instance.
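[Editor's note: the threshold-based batching proposed above — accumulate until a configured buffer size is reached or input runs out, then flush — can be sketched as follows. This is a hypothetical illustration of the idea, not the PR's actual code; the class and method names are invented.]

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of size-bounded batching: memory use is capped at roughly the
// configured buffer size plus one record, avoiding unbounded heap growth.
class SizeBatcher {
    private final long maxBufferBytes;
    private final List<byte[]> buffer = new ArrayList<>();
    private long bufferedBytes = 0;

    SizeBatcher(long maxBufferBytes) {
        this.maxBufferBytes = maxBufferBytes;
    }

    // Add one record; if the buffered size crosses the threshold, the
    // accumulated batch is returned for sending, otherwise null.
    List<byte[]> add(byte[] record) {
        buffer.add(record);
        bufferedBytes += record.length;
        return (bufferedBytes >= maxBufferBytes) ? flush() : null;
    }

    // Drain whatever is buffered, e.g. when no more input is available.
    List<byte[]> flush() {
        List<byte[]> batch = new ArrayList<>(buffer);
        buffer.clear();
        bufferedBytes = 0;
        return batch;
    }
}
```

Setting the threshold to 0 makes every `add` flush immediately, which matches the "no batching" behavior described in the comment.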
[GitHub] nifi pull request: NIFI-1488 fixes
Github user apiri closed the pull request at: https://github.com/apache/nifi/pull/265
[GitHub] nifi pull request: NIFI-1521 Allows use of SSL in AMQP Processor
Github user pvillard31 commented on the pull request: https://github.com/apache/nifi/pull/232#issuecomment-194834756 I just re-based the PR. Let me know if something else is needed.
Re: Broadcasting Flow File
Yes, that's correct. I didn't notice the possibility of using the same relationship multiple times. It is exactly what I wanted to do. Thanks! 2016-03-10 13:02 GMT+01:00 Matthew Clarke: > If I am understanding correctly, you want to take a FlowFile coming from > one processor and produce multiple copies. Each copy is then sent to > different follow-on processors. Correct? This can be accomplished by simply > connecting the success relationship multiple times from the single source > processor to the multiple destination processors. NiFi will automatically > clone the FlowFile when multiple relationships of the same type are > connected. What is even better here is that NiFi does not duplicate the > content but rather just adds additional pointers to the same content for > each replicated FlowFile. > On Mar 10, 2016 4:24 AM, "Pierre Villard" > wrote: > > > Hi, > > > > I have a use case where I need to "broadcast" a flow file to multiple > > processors. At the moment, the only way I see to do that is to combine a > > DuplicateFlowFile processor with a DistributeLoad processor. It seems to > > work fine, but I am wondering if it would be more user friendly to have a > > dedicated processor for this kind of operation? If you think so, I'll be > > happy to have it done. > > > > Pierre > > >
Re: Broadcasting Flow File
If I am understanding correctly, you want to take a FlowFile coming from one processor and produce multiple copies. Each copy is then sent to different follow-on processors. Correct? This can be accomplished by simply connecting the success relationship multiple times from the single source processor to the multiple destination processors. NiFi will automatically clone the FlowFile when multiple relationships of the same type are connected. What is even better here is that NiFi does not duplicate the content but rather just adds additional pointers to the same content for each replicated FlowFile. On Mar 10, 2016 4:24 AM, "Pierre Villard" wrote: > Hi, > > I have a use case where I need to "broadcast" a flow file to multiple > processors. At the moment, the only way I see to do that is to combine a > DuplicateFlowFile processor with a DistributeLoad processor. It seems to > work fine, but I am wondering if it would be more user friendly to have a > dedicated processor for this kind of operation? If you think so, I'll be > happy to have it done. > > Pierre >
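[Editor's note: the clone-by-pointer behavior described above can be modeled in a few lines. This is a hypothetical illustration — not NiFi's actual repository classes — showing that cloning a FlowFile shares one immutable content claim rather than copying bytes.]

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of content sharing: the payload is stored once, and each
// clone just takes another reference to the same claim.
class ContentClaim {
    final byte[] bytes;                          // immutable payload, stored once
    final AtomicInteger refs = new AtomicInteger(0);
    ContentClaim(byte[] bytes) { this.bytes = bytes; }
}

class ModelFlowFile {
    final ContentClaim claim;
    ModelFlowFile(ContentClaim claim) {
        this.claim = claim;
        claim.refs.incrementAndGet();            // bump reference count, no copy
    }
    // Cloning reuses the same claim; content bytes are never duplicated.
    ModelFlowFile cloneFlowFile() { return new ModelFlowFile(claim); }
}
```

So connecting the same relationship to N destinations costs N small FlowFile records pointing at one stored content blob, not N copies of the data.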
Broadcasting Flow File
Hi, I have a use case where I need to "broadcast" a flow file to multiple processors. At the moment, the only way I see to do that is to combine a DuplicateFlowFile processor with a DistributeLoad processor. It seems to work fine, but I am wondering if it would be more user friendly to have a dedicated processor for this kind of operation? If you think so, I'll be happy to have it done. Pierre