[jira] [Resolved] (NIFI-3941) Clarify tab name for controller level controller services dialog

2017-09-07 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto resolved NIFI-3941.
-
   Resolution: Fixed
 Assignee: Andy LoPresto
Fix Version/s: 1.4.0

> Clarify tab name for controller level controller services dialog
> 
>
> Key: NIFI-3941
> URL: https://issues.apache.org/jira/browse/NIFI-3941
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.2.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
>Priority: Minor
>  Labels: label, ui
> Fix For: 1.4.0
>
>
> As a follow-on to 
> [NIFI-3911|https://issues.apache.org/jira/browse/NIFI-3911], I believe the 
> *Controller Services* tab in the "global"/"controller level" *Controller 
> Settings* dialog should be renamed to *Reporting Task Controller Services* 
> and a sentence added above the table explaining that any controller services 
> added here are only accessible by reporting tasks and not accessible by any 
> components on the flow itself. 
> {quote}
> Would it make sense to change the title of the tab in the global Controller 
> Settings dialog to "Reporting Task Controller Services" and add a sentence 
> description above the table stating that these controller services are only 
> accessible by the Reporting Tasks?
> {quote}
> This is helpful for users who do not read the extensive documentation in the 
> User Guide. 





[jira] [Commented] (NIFI-3941) Clarify tab name for controller level controller services dialog

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157987#comment-16157987
 ] 

ASF GitHub Bot commented on NIFI-3941:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2124
  
Ran `contrib-check` and all tests pass. Verified in app that the global CS 
tab name is different from the flow controller services tab. +1, merging. 


> Clarify tab name for controller level controller services dialog
> 
>
> Key: NIFI-3941
> URL: https://issues.apache.org/jira/browse/NIFI-3941
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.2.0
>Reporter: Andy LoPresto
>Priority: Minor
>  Labels: label, ui
>
> As a follow-on to 
> [NIFI-3911|https://issues.apache.org/jira/browse/NIFI-3911], I believe the 
> *Controller Services* tab in the "global"/"controller level" *Controller 
> Settings* dialog should be renamed to *Reporting Task Controller Services* 
> and a sentence added above the table explaining that any controller services 
> added here are only accessible by reporting tasks and not accessible by any 
> components on the flow itself. 
> {quote}
> Would it make sense to change the title of the tab in the global Controller 
> Settings dialog to "Reporting Task Controller Services" and add a sentence 
> description above the table stating that these controller services are only 
> accessible by the Reporting Tasks?
> {quote}
> This is helpful for users who do not read the extensive documentation in the 
> User Guide. 





[GitHub] nifi issue #2124: NIFI-3941 - Clarify tab name for controller...

2017-09-07 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2124
  
Ran `contrib-check` and all tests pass. Verified in app that the global CS 
tab name is different from the flow controller services tab. +1, merging. 


---


[jira] [Commented] (NIFI-3941) Clarify tab name for controller level controller services dialog

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157986#comment-16157986
 ] 

ASF GitHub Bot commented on NIFI-3941:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2124


> Clarify tab name for controller level controller services dialog
> 
>
> Key: NIFI-3941
> URL: https://issues.apache.org/jira/browse/NIFI-3941
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.2.0
>Reporter: Andy LoPresto
>Priority: Minor
>  Labels: label, ui
>
> As a follow-on to 
> [NIFI-3911|https://issues.apache.org/jira/browse/NIFI-3911], I believe the 
> *Controller Services* tab in the "global"/"controller level" *Controller 
> Settings* dialog should be renamed to *Reporting Task Controller Services* 
> and a sentence added above the table explaining that any controller services 
> added here are only accessible by reporting tasks and not accessible by any 
> components on the flow itself. 
> {quote}
> Would it make sense to change the title of the tab in the global Controller 
> Settings dialog to "Reporting Task Controller Services" and add a sentence 
> description above the table stating that these controller services are only 
> accessible by the Reporting Tasks?
> {quote}
> This is helpful for users who do not read the extensive documentation in the 
> User Guide. 





[jira] [Commented] (NIFI-3941) Clarify tab name for controller level controller services dialog

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157985#comment-16157985
 ] 

ASF subversion and git services commented on NIFI-3941:
---

Commit 20d23e836ea28198a2cc2234d09362b674f2a19d in nifi's branch 
refs/heads/master from [~yuri1969]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=20d23e8 ]

NIFI-3941 - Clarify tab name for controller-level controller services dialog

* Changed the tab title since sharing the name makes things
less clear for newcomers.
* Suggested info sentence is omitted.

This closes #2124.

Signed-off-by: Andy LoPresto 


> Clarify tab name for controller level controller services dialog
> 
>
> Key: NIFI-3941
> URL: https://issues.apache.org/jira/browse/NIFI-3941
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.2.0
>Reporter: Andy LoPresto
>Priority: Minor
>  Labels: label, ui
>
> As a follow-on to 
> [NIFI-3911|https://issues.apache.org/jira/browse/NIFI-3911], I believe the 
> *Controller Services* tab in the "global"/"controller level" *Controller 
> Settings* dialog should be renamed to *Reporting Task Controller Services* 
> and a sentence added above the table explaining that any controller services 
> added here are only accessible by reporting tasks and not accessible by any 
> components on the flow itself. 
> {quote}
> Would it make sense to change the title of the tab in the global Controller 
> Settings dialog to "Reporting Task Controller Services" and add a sentence 
> description above the table stating that these controller services are only 
> accessible by the Reporting Tasks?
> {quote}
> This is helpful for users who do not read the extensive documentation in the 
> User Guide. 





[GitHub] nifi pull request #2124: NIFI-3941 - Clarify tab name for controller...

2017-09-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2124


---


[jira] [Commented] (NIFI-3941) Clarify tab name for controller level controller services dialog

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157934#comment-16157934
 ] 

ASF GitHub Bot commented on NIFI-3941:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2124
  
Reviewing...


> Clarify tab name for controller level controller services dialog
> 
>
> Key: NIFI-3941
> URL: https://issues.apache.org/jira/browse/NIFI-3941
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Affects Versions: 1.2.0
>Reporter: Andy LoPresto
>Priority: Minor
>  Labels: label, ui
>
> As a follow-on to 
> [NIFI-3911|https://issues.apache.org/jira/browse/NIFI-3911], I believe the 
> *Controller Services* tab in the "global"/"controller level" *Controller 
> Settings* dialog should be renamed to *Reporting Task Controller Services* 
> and a sentence added above the table explaining that any controller services 
> added here are only accessible by reporting tasks and not accessible by any 
> components on the flow itself. 
> {quote}
> Would it make sense to change the title of the tab in the global Controller 
> Settings dialog to "Reporting Task Controller Services" and add a sentence 
> description above the table stating that these controller services are only 
> accessible by the Reporting Tasks?
> {quote}
> This is helpful for users who do not read the extensive documentation in the 
> User Guide. 





[GitHub] nifi issue #2124: NIFI-3941 - Clarify tab name for controller...

2017-09-07 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2124
  
Reviewing...


---


[jira] [Resolved] (NIFI-4335) Update Listen* components to refer to RestrictedSSLContextService

2017-09-07 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto resolved NIFI-4335.
-
   Resolution: Fixed
Fix Version/s: 1.4.0

> Update Listen* components to refer to RestrictedSSLContextService
> -
>
> Key: NIFI-4335
> URL: https://issues.apache.org/jira/browse/NIFI-4335
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>Assignee: Michael Hogue
>  Labels: jetty, security, tls
> Fix For: 1.4.0
>
>
> [~m-hogue] added a {{RestrictedSSLContextService}} in NIFI-2528 due to 
> discussions regarding the baseline TLS protocol versions that are supported 
> as of NiFi 1.2.0. The {{ListenHTTP}} processor was updated to require this 
> new interface, but other {{Listen*}} processors and services need to be 
> updated as well. 
> From discussion on PR 1986: 
> {quote}
> Also, as @joewitt noted earlier, we should change the available interface for 
> other "listener" processors. Here's a preliminary list I put together, but I 
> would like confirmation from another member:
> * `HandleHTTPRequest`
> * `ListenBeats`
> * 
> `DistributedCacheServer`/`DistributedSetCacheServer`/`DistributedMapCacheServer`
> * `ListenSMTP`
> * `ListenGRPC`
> * `ListenLumberjack` (*Deprecated*)
> * `ListenRELP`
> * `ListenSyslog`
> * `ListenTCP`/`ListenTCPRecord`
> Also:
> * `AbstractSiteToSiteReportingTask`
> * `org.apache.nifi.processors.slack.TestServer`
> * `WebSocketService`/`JettyWebSocketService`
> {quote}





[GitHub] nifi pull request #2131: NIFI-4335: Changed SSL Context Service interfaces t...

2017-09-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2131


---


[jira] [Commented] (NIFI-4335) Update Listen* components to refer to RestrictedSSLContextService

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157927#comment-16157927
 ] 

ASF GitHub Bot commented on NIFI-4335:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2131


> Update Listen* components to refer to RestrictedSSLContextService
> -
>
> Key: NIFI-4335
> URL: https://issues.apache.org/jira/browse/NIFI-4335
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>Assignee: Michael Hogue
>  Labels: jetty, security, tls
>
> [~m-hogue] added a {{RestrictedSSLContextService}} in NIFI-2528 due to 
> discussions regarding the baseline TLS protocol versions that are supported 
> as of NiFi 1.2.0. The {{ListenHTTP}} processor was updated to require this 
> new interface, but other {{Listen*}} processors and services need to be 
> updated as well. 
> From discussion on PR 1986: 
> {quote}
> Also, as @joewitt noted earlier, we should change the available interface for 
> other "listener" processors. Here's a preliminary list I put together, but I 
> would like confirmation from another member:
> * `HandleHTTPRequest`
> * `ListenBeats`
> * 
> `DistributedCacheServer`/`DistributedSetCacheServer`/`DistributedMapCacheServer`
> * `ListenSMTP`
> * `ListenGRPC`
> * `ListenLumberjack` (*Deprecated*)
> * `ListenRELP`
> * `ListenSyslog`
> * `ListenTCP`/`ListenTCPRecord`
> Also:
> * `AbstractSiteToSiteReportingTask`
> * `org.apache.nifi.processors.slack.TestServer`
> * `WebSocketService`/`JettyWebSocketService`
> {quote}





[jira] [Commented] (NIFI-4335) Update Listen* components to refer to RestrictedSSLContextService

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157925#comment-16157925
 ] 

ASF subversion and git services commented on NIFI-4335:
---

Commit 03e51ee8acea7d72a13aea96f60bb726087136ee in nifi's branch 
refs/heads/master from m-hogue
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=03e51ee ]

NIFI-4335: Changed SSLContextService implementations to 
RestrictedSSLContextService for all Listen* processors

This closes #2131.

Signed-off-by: Andy LoPresto 


> Update Listen* components to refer to RestrictedSSLContextService
> -
>
> Key: NIFI-4335
> URL: https://issues.apache.org/jira/browse/NIFI-4335
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>Assignee: Michael Hogue
>  Labels: jetty, security, tls
>
> [~m-hogue] added a {{RestrictedSSLContextService}} in NIFI-2528 due to 
> discussions regarding the baseline TLS protocol versions that are supported 
> as of NiFi 1.2.0. The {{ListenHTTP}} processor was updated to require this 
> new interface, but other {{Listen*}} processors and services need to be 
> updated as well. 
> From discussion on PR 1986: 
> {quote}
> Also, as @joewitt noted earlier, we should change the available interface for 
> other "listener" processors. Here's a preliminary list I put together, but I 
> would like confirmation from another member:
> * `HandleHTTPRequest`
> * `ListenBeats`
> * 
> `DistributedCacheServer`/`DistributedSetCacheServer`/`DistributedMapCacheServer`
> * `ListenSMTP`
> * `ListenGRPC`
> * `ListenLumberjack` (*Deprecated*)
> * `ListenRELP`
> * `ListenSyslog`
> * `ListenTCP`/`ListenTCPRecord`
> Also:
> * `AbstractSiteToSiteReportingTask`
> * `org.apache.nifi.processors.slack.TestServer`
> * `WebSocketService`/`JettyWebSocketService`
> {quote}





[jira] [Commented] (NIFI-4335) Update Listen* components to refer to RestrictedSSLContextService

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157928#comment-16157928
 ] 

ASF GitHub Bot commented on NIFI-4335:
--

Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2131
  
Ran `contrib-check` and all tests pass. Manually verified some of the 
components in a running environment. +1, merging. 


> Update Listen* components to refer to RestrictedSSLContextService
> -
>
> Key: NIFI-4335
> URL: https://issues.apache.org/jira/browse/NIFI-4335
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework, Extensions
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>Assignee: Michael Hogue
>  Labels: jetty, security, tls
>
> [~m-hogue] added a {{RestrictedSSLContextService}} in NIFI-2528 due to 
> discussions regarding the baseline TLS protocol versions that are supported 
> as of NiFi 1.2.0. The {{ListenHTTP}} processor was updated to require this 
> new interface, but other {{Listen*}} processors and services need to be 
> updated as well. 
> From discussion on PR 1986: 
> {quote}
> Also, as @joewitt noted earlier, we should change the available interface for 
> other "listener" processors. Here's a preliminary list I put together, but I 
> would like confirmation from another member:
> * `HandleHTTPRequest`
> * `ListenBeats`
> * 
> `DistributedCacheServer`/`DistributedSetCacheServer`/`DistributedMapCacheServer`
> * `ListenSMTP`
> * `ListenGRPC`
> * `ListenLumberjack` (*Deprecated*)
> * `ListenRELP`
> * `ListenSyslog`
> * `ListenTCP`/`ListenTCPRecord`
> Also:
> * `AbstractSiteToSiteReportingTask`
> * `org.apache.nifi.processors.slack.TestServer`
> * `WebSocketService`/`JettyWebSocketService`
> {quote}





[GitHub] nifi issue #2131: NIFI-4335: Changed SSL Context Service interfaces to Restr...

2017-09-07 Thread alopresto
Github user alopresto commented on the issue:

https://github.com/apache/nifi/pull/2131
  
Ran `contrib-check` and all tests pass. Manually verified some of the 
components in a running environment. +1, merging. 


---


[GitHub] nifi-minifi-cpp pull request #133: Merge Content processor

2017-09-07 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/133#discussion_r137021454
  
--- Diff: libminifi/include/processors/BinFiles.h ---
@@ -0,0 +1,295 @@
+/**
+ * @file BinFiles.h
+ * BinFiles class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef __BIN_FILES_H__
+#define __BIN_FILES_H__
+
+#include 
+#include 
+#include 
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Resource.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+// Bin Class
+class Bin {
+ public:
+  // Constructor
+   /*!
+* Create a new Bin
+*/
+  explicit Bin(uint64_t minSize, uint64_t maxSize, int minEntries, int 
maxEntries, std::string fileCount, std::string groupId)
+  : minSize_(minSize), maxSize_(maxSize), maxEntries_(maxEntries), 
minEntries_(minEntries), fileCount_(fileCount),
+groupId_(groupId), 
logger_(logging::LoggerFactory::getLogger()) {
+queued_data_size_ = 0;
+creation_dated_ = getTimeMillis();
+std::shared_ptr id_generator = 
utils::IdGenerator::getIdGenerator();
+char uuidStr[37] = { 0 };
+id_generator->generate(uuid_);
+uuid_unparse_lower(uuid_, uuidStr);
+uuid_str_ = uuidStr;
+logger_->log_info("Bin %s for group %s created", uuid_str_, groupId_);
+  }
+  virtual ~Bin() {
+logger_->log_info("Bin %s for group %s destroyed", uuid_str_, 
groupId_);
+  }
+  // check whether the bin is full
+  bool isFull() {
+if (queued_data_size_ >= maxSize_ || queue_.size() >= maxEntries_)
+  return true;
+else
+  return false;
+  }
+  // check whether the bin meet the min required size and entries
+  bool isFullEnough() {
+return isFull() || (queued_data_size_ >= minSize_ && queue_.size() >= 
minEntries_);
+  }
+  // check whether the bin is older than the time specified in msec
+  bool isOlderThan(uint64_t duration) {
+uint64_t currentTime = getTimeMillis();
+if (currentTime > (creation_dated_ + duration))
+  return true;
+else
+  return false;
+  }
+  std::deque & getFlowFile() {
+return queue_;
+  }
+  // offer the flowfile to the bin
+  bool offer(std::shared_ptr flow) {
+if (!fileCount_.empty()) {
+  std::string value;
+  if (flow->getAttribute(fileCount_, value)) {
+try {
+  // for defrag case using the identification
+  int count = std::stoi(value);
+  maxEntries_ = count;
+  minEntries_ = count;
+} catch (...) {
+
+}
+  }
+}
+
+if ((queued_data_size_ + flow->getSize()) > maxSize_ || (queue_.size() 
+ 1) > maxEntries_)
+  return false;
+
+queue_.push_back(flow);
+queued_data_size_ += flow->getSize();
+logger_->log_info("Bin %s for group %s offer size %d byte %d min_entry 
%d max_entry %d",
+uuid_str_, groupId_, queue_.size(), queued_data_size_, 
minEntries_, maxEntries_);
+
+return true;
+  }
+  // getBinAge
+  uint64_t getBinAge() {
+return creation_dated_;
+  }
+  int getSize() {
+return queue_.size();
+  }
+  // Get the UUID as string
+  std::string getUUIDStr() {
+return uuid_str_;
+  }
+  std::string getGroupId() {
+return groupId_;
+  }
+
+ protected:
+
+ private:
+  uint64_t minSize_;
+  uint64_t maxSize_;
+  int maxEntries_;
+  int minEntries_;
+  // Queued data size
+  uint64_t queued_data_size_;
+   // Queue for the Flow File
+  std::deque queue_;
+  uint64_t creation_dated_;
+  

[GitHub] nifi-minifi-cpp pull request #133: Merge Content processor

2017-09-07 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/133#discussion_r137022742
  
--- Diff: libminifi/include/processors/BinFiles.h ---
@@ -0,0 +1,295 @@
+/**
+ * @file BinFiles.h
+ * BinFiles class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef __BIN_FILES_H__
+#define __BIN_FILES_H__
+
+#include 
+#include 
+#include 
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Resource.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+// Bin Class
+class Bin {
+ public:
+  // Constructor
+   /*!
+* Create a new Bin
+*/
+  explicit Bin(uint64_t minSize, uint64_t maxSize, int minEntries, int 
maxEntries, std::string fileCount, std::string groupId)
+  : minSize_(minSize), maxSize_(maxSize), maxEntries_(maxEntries), 
minEntries_(minEntries), fileCount_(fileCount),
+groupId_(groupId), 
logger_(logging::LoggerFactory::getLogger()) {
+queued_data_size_ = 0;
+creation_dated_ = getTimeMillis();
+std::shared_ptr id_generator = 
utils::IdGenerator::getIdGenerator();
+char uuidStr[37] = { 0 };
+id_generator->generate(uuid_);
+uuid_unparse_lower(uuid_, uuidStr);
+uuid_str_ = uuidStr;
+logger_->log_info("Bin %s for group %s created", uuid_str_, groupId_);
+  }
+  virtual ~Bin() {
+logger_->log_info("Bin %s for group %s destroyed", uuid_str_, 
groupId_);
+  }
+  // check whether the bin is full
+  bool isFull() {
+if (queued_data_size_ >= maxSize_ || queue_.size() >= maxEntries_)
+  return true;
+else
+  return false;
+  }
+  // check whether the bin meet the min required size and entries
+  bool isFullEnough() {
--- End diff --

Can we make this clearer? Enough is not very descriptive. I know the Java 
variant uses this nomenclature, but I think we can do a little better for other 
developers. 
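
As a purely illustrative sketch (the name below is a suggestion, not anything in the PR), the intent could be spelled out in the method name:

{code:cpp}
// Sketch only: same check as isFullEnough(), with a name that states the
// condition: the bin is either at its hard limits, or has reached both the
// configured minimum size and minimum entry count.
bool meetsMinimumSizeAndEntries() {
  return isFull() || (queued_data_size_ >= minSize_ &&
                      queue_.size() >= static_cast<size_t>(minEntries_));
}
{code}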


---


[GitHub] nifi-minifi-cpp pull request #133: Merge Content processor

2017-09-07 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/133#discussion_r137023241
  
--- Diff: libminifi/include/processors/BinFiles.h ---
@@ -0,0 +1,295 @@
+/**
+ * @file BinFiles.h
+ * BinFiles class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef __BIN_FILES_H__
+#define __BIN_FILES_H__
+
+#include 
+#include 
+#include 
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Resource.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+// Bin Class
+class Bin {
+ public:
+  // Constructor
+   /*!
+* Create a new Bin
+*/
+  explicit Bin(uint64_t minSize, uint64_t maxSize, int minEntries, int 
maxEntries, std::string fileCount, std::string groupId)
+  : minSize_(minSize), maxSize_(maxSize), maxEntries_(maxEntries), 
minEntries_(minEntries), fileCount_(fileCount),
+groupId_(groupId), 
logger_(logging::LoggerFactory::getLogger()) {
+queued_data_size_ = 0;
+creation_dated_ = getTimeMillis();
+std::shared_ptr id_generator = 
utils::IdGenerator::getIdGenerator();
+char uuidStr[37] = { 0 };
+id_generator->generate(uuid_);
+uuid_unparse_lower(uuid_, uuidStr);
+uuid_str_ = uuidStr;
+logger_->log_info("Bin %s for group %s created", uuid_str_, groupId_);
+  }
+  virtual ~Bin() {
+logger_->log_info("Bin %s for group %s destroyed", uuid_str_, 
groupId_);
+  }
+  // check whether the bin is full
+  bool isFull() {
+if (queued_data_size_ >= maxSize_ || queue_.size() >= maxEntries_)
+  return true;
+else
+  return false;
+  }
+  // check whether the bin meet the min required size and entries
+  bool isFullEnough() {
+return isFull() || (queued_data_size_ >= minSize_ && queue_.size() >= 
minEntries_);
+  }
+  // check whether the bin is older than the time specified in msec
+  bool isOlderThan(uint64_t duration) {
+uint64_t currentTime = getTimeMillis();
+if (currentTime > (creation_dated_ + duration))
+  return true;
+else
+  return false;
+  }
+  std::deque & getFlowFile() {
+return queue_;
+  }
+  // offer the flowfile to the bin
+  bool offer(std::shared_ptr flow) {
+if (!fileCount_.empty()) {
+  std::string value;
+  if (flow->getAttribute(fileCount_, value)) {
+try {
+  // for defrag case using the identification
+  int count = std::stoi(value);
+  maxEntries_ = count;
+  minEntries_ = count;
+} catch (...) {
+
+}
+  }
+}
+
+if ((queued_data_size_ + flow->getSize()) > maxSize_ || (queue_.size() 
+ 1) > maxEntries_)
+  return false;
+
+queue_.push_back(flow);
+queued_data_size_ += flow->getSize();
+logger_->log_info("Bin %s for group %s offer size %d byte %d min_entry 
%d max_entry %d",
+uuid_str_, groupId_, queue_.size(), queued_data_size_, 
minEntries_, maxEntries_);
+
+return true;
+  }
+  // getBinAge
+  uint64_t getBinAge() {
+return creation_dated_;
+  }
+  int getSize() {
+return queue_.size();
+  }
+  // Get the UUID as string
+  std::string getUUIDStr() {
+return uuid_str_;
+  }
+  std::string getGroupId() {
+return groupId_;
+  }
+
+ protected:
+
+ private:
+  uint64_t minSize_;
+  uint64_t maxSize_;
+  int maxEntries_;
+  int minEntries_;
+  // Queued data size
+  uint64_t queued_data_size_;
+   // Queue for the Flow File
+  std::deque queue_;
+  uint64_t creation_dated_;
+  

[GitHub] nifi-minifi-cpp pull request #133: Merge Content processor

2017-09-07 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/133#discussion_r137022835
  
--- Diff: libminifi/include/processors/BinFiles.h ---
@@ -0,0 +1,295 @@
+/**
+ * @file BinFiles.h
+ * BinFiles class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef __BIN_FILES_H__
+#define __BIN_FILES_H__
+
+#include 
+#include 
+#include 
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Resource.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+// Bin Class
+class Bin {
+ public:
+  // Constructor
+   /*!
+* Create a new Bin
+*/
+  explicit Bin(uint64_t minSize, uint64_t maxSize, int minEntries, int 
maxEntries, std::string fileCount, std::string groupId)
+  : minSize_(minSize), maxSize_(maxSize), maxEntries_(maxEntries), 
minEntries_(minEntries), fileCount_(fileCount),
+groupId_(groupId), 
logger_(logging::LoggerFactory::getLogger()) {
+queued_data_size_ = 0;
+creation_dated_ = getTimeMillis();
+std::shared_ptr id_generator = 
utils::IdGenerator::getIdGenerator();
+char uuidStr[37] = { 0 };
+id_generator->generate(uuid_);
+uuid_unparse_lower(uuid_, uuidStr);
+uuid_str_ = uuidStr;
+logger_->log_info("Bin %s for group %s created", uuid_str_, groupId_);
+  }
+  virtual ~Bin() {
+logger_->log_info("Bin %s for group %s destroyed", uuid_str_, 
groupId_);
+  }
+  // check whether the bin is full
+  bool isFull() {
+if (queued_data_size_ >= maxSize_ || queue_.size() >= maxEntries_)
+  return true;
+else
+  return false;
+  }
+  // check whether the bin meet the min required size and entries
+  bool isFullEnough() {
+return isFull() || (queued_data_size_ >= minSize_ && queue_.size() >= 
minEntries_);
+  }
+  // check whether the bin is older than the time specified in msec
+  bool isOlderThan(uint64_t duration) {
--- End diff --

const ref?
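
Since `duration` is a plain `uint64_t`, passing it by value is already as cheap as a reference; the more likely gain is marking the read-only methods `const`. A minimal sketch of that direction, illustrative only and assuming `getTimeMillis()` is a free function as its use in the constructor suggests:

{code:cpp}
// Sketch only: read-only queries declared const so they can be called through
// a const Bin&. A const reference buys nothing for a primitive parameter.
bool isOlderThan(uint64_t duration) const {
  return getTimeMillis() > (creation_dated_ + duration);
}

uint64_t getBinAge() const {
  return creation_dated_;
}
{code}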


---


[GitHub] nifi-minifi-cpp pull request #133: Merge Content processor

2017-09-07 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/133#discussion_r136140985
  
--- Diff: libminifi/include/processors/BinFiles.h ---
@@ -0,0 +1,295 @@
+/**
+ * @file BinFiles.h
+ * BinFiles class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef __BIN_FILES_H__
+#define __BIN_FILES_H__
+
+#include 
+#include 
+#include 
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Resource.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+// Bin Class
+class Bin {
+ public:
+  // Constructor
+   /*!
+* Create a new Bin
+*/
+  explicit Bin(uint64_t minSize, uint64_t maxSize, int minEntries, int 
maxEntries, std::string fileCount, std::string groupId)
--- End diff --

Any reason why they can't be passed in by const ref?
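
For reference, a minimal sketch of the signature change being suggested (illustrative only; it assumes the strings are simply copied into the corresponding members, and omits the logger/uuid/timestamp initialization for brevity):

{code:cpp}
// Sketch only: the two std::string parameters taken by const reference to
// avoid an extra copy at the call site; the members still store their own
// copies via the initializer list.
explicit Bin(uint64_t minSize, uint64_t maxSize, int minEntries, int maxEntries,
             const std::string &fileCount, const std::string &groupId)
    : minSize_(minSize),
      maxSize_(maxSize),
      maxEntries_(maxEntries),
      minEntries_(minEntries),
      fileCount_(fileCount),
      groupId_(groupId),
      queued_data_size_(0) {
}
{code}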


---


[GitHub] nifi-minifi-cpp pull request #133: Merge Content processor

2017-09-07 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/133#discussion_r136137946
  
--- Diff: libminifi/include/processors/BinFiles.h ---
@@ -0,0 +1,295 @@
+/**
+ * @file BinFiles.h
+ * BinFiles class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef __BIN_FILES_H__
+#define __BIN_FILES_H__
+
+#include 
+#include 
+#include 
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Resource.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+// Bin Class
+class Bin {
+ public:
+  // Constructor
+   /*!
+* Create a new Bin
+*/
+  explicit Bin(uint64_t minSize, uint64_t maxSize, int minEntries, int 
maxEntries, std::string fileCount, std::string groupId)
+  : minSize_(minSize), maxSize_(maxSize), maxEntries_(maxEntries), 
minEntries_(minEntries), fileCount_(fileCount),
+groupId_(groupId), 
logger_(logging::LoggerFactory::getLogger()) {
+queued_data_size_ = 0;
+creation_dated_ = getTimeMillis();
+std::shared_ptr id_generator = 
utils::IdGenerator::getIdGenerator();
+char uuidStr[37] = { 0 };
+id_generator->generate(uuid_);
+uuid_unparse_lower(uuid_, uuidStr);
+uuid_str_ = uuidStr;
+logger_->log_info("Bin %s for group %s created", uuid_str_, groupId_);
+  }
+  virtual ~Bin() {
+logger_->log_info("Bin %s for group %s destroyed", uuid_str_, 
groupId_);
+  }
+  // check whether the bin is full
+  bool isFull() {
+if (queued_data_size_ >= maxSize_ || queue_.size() >= maxEntries_)
+  return true;
+else
+  return false;
+  }
+  // check whether the bin meet the min required size and entries
+  bool isFullEnough() {
+return isFull() || (queued_data_size_ >= minSize_ && queue_.size() >= 
minEntries_);
+  }
+  // check whether the bin is older than the time specified in msec
+  bool isOlderThan(uint64_t duration) {
+uint64_t currentTime = getTimeMillis();
+if (currentTime > (creation_dated_ + duration))
+  return true;
+else
+  return false;
+  }
+  std::deque & getFlowFile() {
+return queue_;
+  }
+  // offer the flowfile to the bin
+  bool offer(std::shared_ptr flow) {
+if (!fileCount_.empty()) {
+  std::string value;
+  if (flow->getAttribute(fileCount_, value)) {
+try {
+  // for defrag case using the identification
+  int count = std::stoi(value);
+  maxEntries_ = count;
+  minEntries_ = count;
+} catch (...) {
+
+}
+  }
+}
+
+if ((queued_data_size_ + flow->getSize()) > maxSize_ || (queue_.size() 
+ 1) > maxEntries_)
+  return false;
+
+queue_.push_back(flow);
+queued_data_size_ += flow->getSize();
+logger_->log_info("Bin %s for group %s offer size %d byte %d min_entry 
%d max_entry %d",
+uuid_str_, groupId_, queue_.size(), queued_data_size_, 
minEntries_, maxEntries_);
+
+return true;
+  }
+  // getBinAge
+  uint64_t getBinAge() {
+return creation_dated_;
+  }
+  int getSize() {
+return queue_.size();
+  }
+  // Get the UUID as string
+  std::string getUUIDStr() {
+return uuid_str_;
+  }
+  std::string getGroupId() {
+return groupId_;
+  }
+
+ protected:
+
+ private:
+  uint64_t minSize_;
+  uint64_t maxSize_;
+  int maxEntries_;
+  int minEntries_;
+  // Queued data size
+  uint64_t queued_data_size_;
+   // Queue for the Flow File
+  std::deque queue_;
+  uint64_t creation_dated_;
+  

[GitHub] nifi-minifi-cpp pull request #133: Merge Content processor

2017-09-07 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/133#discussion_r137020354
  
--- Diff: libminifi/include/processors/BinFiles.h ---
@@ -0,0 +1,295 @@
+/**
+ * @file BinFiles.h
+ * BinFiles class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef __BIN_FILES_H__
+#define __BIN_FILES_H__
+
+#include 
+#include 
+#include 
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Resource.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+// Bin Class
+class Bin {
+ public:
+  // Constructor
+   /*!
+* Create a new Bin
--- End diff --

We should have a comment that this object is not thread safe. 
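
A sketch of the kind of class-level note being requested (wording is illustrative only):

{code:cpp}
/**
 * Bin groups FlowFiles that are candidates to be merged together.
 *
 * Note: this class is NOT thread safe. The owner (for example, a bin manager
 * inside the BinFiles processor) must serialize access if a Bin is ever
 * shared across threads.
 */
class Bin {
  // ... members and methods as declared in BinFiles.h ...
};
{code}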


---


[GitHub] nifi-minifi-cpp pull request #133: Merge Content processor

2017-09-07 Thread phrocker
Github user phrocker commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/133#discussion_r137021281
  
--- Diff: libminifi/include/processors/BinFiles.h ---
@@ -0,0 +1,295 @@
+/**
+ * @file BinFiles.h
+ * BinFiles class declaration
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef __BIN_FILES_H__
+#define __BIN_FILES_H__
+
+#include 
+#include 
+#include 
+#include "FlowFileRecord.h"
+#include "core/Processor.h"
+#include "core/ProcessSession.h"
+#include "core/Core.h"
+#include "core/Resource.h"
+#include "core/logging/LoggerConfiguration.h"
+#include "utils/Id.h"
+
+namespace org {
+namespace apache {
+namespace nifi {
+namespace minifi {
+namespace processors {
+
+// Bin Class
+class Bin {
+ public:
+  // Constructor
+   /*!
+* Create a new Bin
+*/
+  explicit Bin(uint64_t minSize, uint64_t maxSize, int minEntries, int 
maxEntries, std::string fileCount, std::string groupId)
+  : minSize_(minSize), maxSize_(maxSize), maxEntries_(maxEntries), 
minEntries_(minEntries), fileCount_(fileCount),
+groupId_(groupId), 
logger_(logging::LoggerFactory::getLogger()) {
+queued_data_size_ = 0;
+creation_dated_ = getTimeMillis();
+std::shared_ptr id_generator = 
utils::IdGenerator::getIdGenerator();
+char uuidStr[37] = { 0 };
+id_generator->generate(uuid_);
+uuid_unparse_lower(uuid_, uuidStr);
+uuid_str_ = uuidStr;
+logger_->log_info("Bin %s for group %s created", uuid_str_, groupId_);
+  }
+  virtual ~Bin() {
+logger_->log_info("Bin %s for group %s destroyed", uuid_str_, 
groupId_);
+  }
+  // check whether the bin is full
+  bool isFull() {
+if (queued_data_size_ >= maxSize_ || queue_.size() >= maxEntries_)
+  return true;
+else
+  return false;
+  }
+  // check whether the bin meet the min required size and entries
+  bool isFullEnough() {
+return isFull() || (queued_data_size_ >= minSize_ && queue_.size() >= 
minEntries_);
+  }
+  // check whether the bin is older than the time specified in msec
+  bool isOlderThan(uint64_t duration) {
+uint64_t currentTime = getTimeMillis();
+if (currentTime > (creation_dated_ + duration))
+  return true;
+else
+  return false;
+  }
+  std::deque & getFlowFile() {
+return queue_;
+  }
+  // offer the flowfile to the bin
+  bool offer(std::shared_ptr flow) {
+if (!fileCount_.empty()) {
+  std::string value;
+  if (flow->getAttribute(fileCount_, value)) {
+try {
+  // for defrag case using the identification
+  int count = std::stoi(value);
+  maxEntries_ = count;
+  minEntries_ = count;
+} catch (...) {
+
+}
+  }
+}
+
+if ((queued_data_size_ + flow->getSize()) > maxSize_ || (queue_.size() 
+ 1) > maxEntries_)
+  return false;
+
+queue_.push_back(flow);
+queued_data_size_ += flow->getSize();
+logger_->log_info("Bin %s for group %s offer size %d byte %d min_entry 
%d max_entry %d",
+uuid_str_, groupId_, queue_.size(), queued_data_size_, 
minEntries_, maxEntries_);
+
+return true;
+  }
+  // getBinAge
+  uint64_t getBinAge() {
+return creation_dated_;
+  }
+  int getSize() {
+return queue_.size();
+  }
+  // Get the UUID as string
+  std::string getUUIDStr() {
+return uuid_str_;
+  }
+  std::string getGroupId() {
+return groupId_;
+  }
+
+ protected:
+
+ private:
+  uint64_t minSize_;
+  uint64_t maxSize_;
+  int maxEntries_;
+  int minEntries_;
+  // Queued data size
+  uint64_t queued_data_size_;
+   // Queue for the Flow File
+  std::deque queue_;
+  uint64_t creation_dated_;
+  

[jira] [Comment Edited] (NIFI-4332) Add NiFi Shell for interacting with NiFi REST API

2017-09-07 Thread Daniel Chaffelson (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157728#comment-16157728
 ] 

Daniel Chaffelson edited comment on NIFI-4332 at 9/7/17 9:55 PM:
-

In the interest of -resolving- adding to this issue I have started earnest work 
on a Python wrapper for the Rest API by leveraging the Swagger definition 
produced at build time to create a base API interface in native Python, and 
then a wrapper module to provide users with much-requested high level 
functions. The use of the Swagger definition provides the full Rest API as 
callable python methods and properties with minimal additional coding, with 
easy integration into more abstract commands like 'deploy template'.
https://github.com/Chaffelson/nipyapi
This is mainly driven by several large users wanting NiFi as a layer in their 
data-movement-as-a-service platform, where a user portal drives instantiation 
of templates to complete movement tasks which leverage NiFi's other desirable 
framework features, but make little initial use of the GUI apart from 
administrative monitoring tasks.
Contributions and advice on how this can integrate with current development 
efforts are very welcome.

I am also aware of a project in Java focused on Template deployment, but 
with good potential for expansion
https://github.com/hermannpencole/nifi-config


was (Author: chaffelson):
In the interest of -resolving- adding to this issue I have started earnest work 
on a Python wrapper for the Rest API by leveraging the Swagger definition 
produced at build time to create a base API interface in native Python, and 
then a wrapper module to provide users with much-requested high level 
functions. The use of the Swagger definition provides the full Rest API as 
callable python methods and properties with minimal additional coding, with 
easy integration into more abstract commands like 'deploy template'.
https://github.com/Chaffelson/nipyapi
This is mainly driven by several large users wanting NiFi as a layer in their 
data-movement-as-a-service platform, where a user portal drives instantiation 
of templates to complete movement tasks which leverage NiFi's other desirable 
framework features, but make little initial use of the GUI apart from 
administrative monitoring tasks.

I am also aware of a project in Java focused on Template deployment, but 
with good potential for expansion
https://github.com/hermannpencole/nifi-config

> Add NiFi Shell for interacting with NiFi REST API
> -
>
> Key: NIFI-4332
> URL: https://issues.apache.org/jira/browse/NIFI-4332
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jeremy Dyer
>Assignee: Jeremy Dyer
>
> There are several permutations of NiFi shells floating around on GitHub. The 
> fact that so many of these exist tells me it's a feature people want. I 
> propose we add a NiFi shell to the official project that people can use for 
> official interaction with the NiFi REST API. While shells are typically not 
> written in Java I feel quite strongly in our case using Java would be the 
> best fit. Using Java would allow us to use reflection on the "nifi-web-api" 
> layer to reflect expected types, paths, responses, etc. with minimal coding 
> effort.
> I expect there will be many more features that can be added to this shell but 
> as a minimal starting point the shell should allow an end user to interact 
> with all of the NiFi REST API endpoints defined at 
> https://nifi.apache.org/docs/nifi-docs/rest-api/index.html





[jira] [Commented] (NIFI-4332) Add NiFi Shell for interacting with NiFi REST API

2017-09-07 Thread Daniel Chaffelson (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157728#comment-16157728
 ] 

Daniel Chaffelson commented on NIFI-4332:
-

In the interest of -resolving- adding to this issue I have started earnest work 
on a Python wrapper for the Rest API by leveraging the Swagger definition 
produced at build time to create a base API interface in native Python, and 
then a wrapper module to provide users with much-requested high level 
functions. The use of the Swagger definition provides the full Rest API as 
callable python methods and properties with minimal additional coding, with 
easy integration into more abstract commands like 'deploy template'.
https://github.com/Chaffelson/nipyapi
This is mainly driven by several large users wanting NiFi as a layer in their 
data-movement-as-a-service platform, where a user portal drives instantiation 
of templates to complete movement tasks which leverage NiFi's other desirable 
framework features, but make little initial use of the GUI apart from 
administrative monitoring tasks.

I am also aware of a project in Java focused on Template deployment, but 
with good potential for expansion
https://github.com/hermannpencole/nifi-config

> Add NiFi Shell for interacting with NiFi REST API
> -
>
> Key: NIFI-4332
> URL: https://issues.apache.org/jira/browse/NIFI-4332
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Jeremy Dyer
>Assignee: Jeremy Dyer
>
> There are several permutations of NiFi shells floating around on GitHub. The 
> fact that so many of these exist tells me it's a feature people want. I 
> propose we add a NiFi shell to the official project that people can use for 
> official interaction with the NiFi REST API. While shells are typically not 
> written in Java I feel quite strongly in our case using Java would be the 
> best fit. Using Java would allow us to use reflection on the "nifi-web-api" 
> layer to reflect expected types, paths, responses, etc. with minimal coding 
> effort.
> I expect there will be many more features that can be added to this shell but 
> as a minimal starting point the shell should allow an end user to interact 
> with all of the NiFi REST API endpoints defined at 
> https://nifi.apache.org/docs/nifi-docs/rest-api/index.html





[jira] [Created] (NIFI-4366) XML Record Reader

2017-09-07 Thread Pierre Villard (JIRA)
Pierre Villard created NIFI-4366:


 Summary: XML Record Reader
 Key: NIFI-4366
 URL: https://issues.apache.org/jira/browse/NIFI-4366
 Project: Apache NiFi
  Issue Type: New Feature
  Components: Extensions
Reporter: Pierre Villard


Create an XML Record Reader that can be used in record-oriented processors with 
XML data as input.

The XML Reader should rely on a schema registry to ensure that the input data 
can be converted to records. This will be helpful to avoid issues in case we're 
reading an XML document with a single element that should, in fact, be an array.

It's also necessary to define how different XML structures will be converted 
into records.

{noformat}

{noformat}

{noformat}
value
{noformat}

{noformat}
some text
{noformat}





[jira] [Created] (NIFI-4365) Issue with "primary node only"

2017-09-07 Thread Andre Labbe (JIRA)
Andre Labbe created NIFI-4365:
-

 Summary: Issue with "primary node only"
 Key: NIFI-4365
 URL: https://issues.apache.org/jira/browse/NIFI-4365
 Project: Apache NiFi
  Issue Type: Bug
  Components: Configuration
Affects Versions: 1.1.1
 Environment: Linux
Reporter: Andre Labbe


I am new to Nifi and this site. Please let me know if this is not the correct 
place to post this.

I have a Nifi cluster with two nodes, I have created a flow that I configured 
to run with the "primary node only" setting. The issue is that if I configure 
it to run on "primary node only" then it does not run. When I reconfigure it to 
run on "All nodes" it runs fine but I get twice the data that I need. 

Is there a trick to using "primary node only"?
Did I misconfigure the cluster?

Any assistance would be appreciated. 

Thank you.





[jira] [Comment Edited] (NIFI-4238) Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select query due to org.apache.nifi.processor.exception.ProcessException: Error during databas

2017-09-07 Thread bruce lowther (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157635#comment-16157635
 ] 

bruce lowther edited comment on NIFI-4238 at 9/7/17 8:54 PM:
-

Just to remove most of the database types from the equation, I created a table:
{code:sql}
create table footest (first text, second text, third text);
insert into footest (first,second,third) values ('one','two','three');
{code}

When I configure the QueryDatabaseTable, I get a similar error:
{code:none}
2017-09-07 20:46:26,468 ERROR [Timer-Driven Process Thread-57] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=015e134c-5c1d-1689-deb5-be21b71befec]
Unable to execute SQL select query SELECT first,second,third FROM footest due 
to org.apache.nifi.processor.exception.ProcessException: Error during database 
qu
ery or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:305)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2529)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:299)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: SQLite JDBC: inconsistent internal 
state
at org.sqlite.core.CoreResultSet.checkCol(CoreResultSet.java:81)
at 
org.sqlite.jdbc3.JDBC3ResultSet.getColumnCount(JDBC3ResultSet.java:699)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:423)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:242)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:303)
... 13 common frames omitted
{code}


When I run this from 'ExecuteSQL' I get a flow file with one row.
{code:none}
[ {
  "first" : "one",
  "second" : "two",
  "third" : "three"
} ]
{code}


was (Author: osunderdog):
Just to remove most of the database types from the equation, I created a table:
{code:sql}
create table footest (first text, second text, third text);
insert into footest (first,second,third) values ('one','two','three');
{code}

When I configure the QueryDatabaseTable, I get a similar error:
{code:text}
2017-09-07 20:46:26,468 ERROR [Timer-Driven Process Thread-57] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=015e134c-5c1d-1689-deb5-be21b71befec]
Unable to execute SQL select query SELECT first,second,third FROM footest due 
to org.apache.nifi.processor.exception.ProcessException: Error during database 
qu
ery or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:305)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2529)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:299)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at 

[jira] [Comment Edited] (NIFI-4238) Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select query due to org.apache.nifi.processor.exception.ProcessException: Error during databas

2017-09-07 Thread bruce lowther (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157603#comment-16157603
 ] 

bruce lowther edited comment on NIFI-4238 at 9/7/17 8:54 PM:
-

Matt, thank you for responding.  Here is the table creation I'm using for the 
sqlite table:

{code:sql}
create table mamtx_watermark (
   id integer primary key autoincrement,
   dt datetime default (datetime('now')),
   site text,
   app text,
   state text
   );
{code}

I tried the SQL presented in the error logs against the sqlite3 database and it 
works as expected:
{code:sql}
sqlite> SELECT id, datetime(dt) as 'dt', site, app, state FROM mamtx_watermark 
WHERE id > 3;
4|2017-09-07 19:27:50|FOO|TST|INCREMENTAL
5|2017-09-07 19:25:43|BAR|TST|NEW
6|2017-09-07 19:25:45|BAZ|TST|NEW
{code}

I tried many variations of the select by changing the contents of the 'Columns 
to Return' parameter.
For example, I even tried a select with constant data by setting Columns to 
Return to:
'id','dt','site','app','state'
and I get the same error.

{code:sql}
sqlite> select 'id','dt','site','app','state' from mamtx_watermark where id > 3;
id|dt|site|app|state
id|dt|site|app|state
id|dt|site|app|state
{code}

{code:none}
Unable to execute SQL select query SELECT 'id','dt','site','app','state' FROM 
mamtx_watermark WHERE id > 6 due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:305)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2529)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:299)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: createSchema: Unknown SQL type 0 
/ NULL (table: mamtx_watermark, column: _id_) cannot be converted to Avro type
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:564)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:242)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:303)
... 13 common frames omitted
{code}


was (Author: osunderdog):
Matt, thank you for responding.  Here is the table creation I'm using for the 
sqlite table:

{code:sql}
create table mamtx_watermark (
   id integer primary key autoincrement,
   dt datetime default (datetime('now')),
   site text,
   app text,
   state text
   );
{code}

I tried the SQL presented in the error logs against the sqlite3 database and it 
works as expected:
{code:sql}
sqlite> SELECT id, datetime(dt) as 'dt', site, app, state FROM mamtx_watermark 
WHERE id > 3;
4|2017-09-07 19:27:50|FOO|TST|INCREMENTAL
5|2017-09-07 19:25:43|BAR|TST|NEW
6|2017-09-07 19:25:45|BAZ|TST|NEW
{code}

I tried many variations of the select by changing the contents of the 'Columns 
to Return' parameter.
For example, I even tried a select with constant data by setting Columns to 
Return to:
'id','dt','site','app','state'
and I get the same error.

{code:sql}
sqlite> select 'id','dt','site','app','state' from mamtx_watermark where id > 3;
id|dt|site|app|state
id|dt|site|app|state
id|dt|site|app|state
{code}

{code:error}
Unable to execute SQL select query SELECT 'id','dt','site','app','state' FROM 
mamtx_watermark WHERE id > 6 due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion 

[jira] [Comment Edited] (NIFI-4238) Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select query due to org.apache.nifi.processor.exception.ProcessException: Error during databas

2017-09-07 Thread bruce lowther (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157635#comment-16157635
 ] 

bruce lowther edited comment on NIFI-4238 at 9/7/17 8:53 PM:
-

Just to remove most of the database types from the equation, I created a table:
{code:sql}
create table footest (first text, second text, third text);
insert into footest (first,second,third) values ('one','two','three');
{code}

When I configure QueryDatabaseTable, I get a similar error:
{code:text}
2017-09-07 20:46:26,468 ERROR [Timer-Driven Process Thread-57] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=015e134c-5c1d-1689-deb5-be21b71befec]
Unable to execute SQL select query SELECT first,second,third FROM footest due 
to org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:305)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2529)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:299)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: SQLite JDBC: inconsistent internal 
state
at org.sqlite.core.CoreResultSet.checkCol(CoreResultSet.java:81)
at 
org.sqlite.jdbc3.JDBC3ResultSet.getColumnCount(JDBC3ResultSet.java:699)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:423)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:242)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:303)
... 13 common frames omitted
{code}


When I run this from 'ExecuteSQL' I get a flow file with one row.
{code:none}
[ {
  "first" : "one",
  "second" : "two",
  "third" : "three"
} ]
{code}


was (Author: osunderdog):
Just to remove most of the database types from the equation, I created a table:
{code:sql}
create table footest (first text, second text, third text);
insert into footest (first,second,third) values ('one','two','three');
{code}

When I configure QueryDatabaseTable, I get a similar error:
{code:text}
2017-09-07 20:46:26,468 ERROR [Timer-Driven Process Thread-57] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=015e134c-5c1d-1689-deb5-be21b71befec]
Unable to execute SQL select query SELECT first,second,third FROM footest due 
to org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:305)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2529)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:299)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at 

[jira] [Comment Edited] (NIFI-4238) Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select query due to org.apache.nifi.processor.exception.ProcessException: Error during databas

2017-09-07 Thread bruce lowther (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157635#comment-16157635
 ] 

bruce lowther edited comment on NIFI-4238 at 9/7/17 8:53 PM:
-

Just to remove most of the database types from the equation, I created a table:
{code:sql}
create table footest (first text, second text, third text);
insert into footest (first,second,third) values ('one','two','three');
{code}

When I configure QueryDatabaseTable, I get a similar error:
{code:text}
2017-09-07 20:46:26,468 ERROR [Timer-Driven Process Thread-57] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=015e134c-5c1d-1689-deb5-be21b71befec]
Unable to execute SQL select query SELECT first,second,third FROM footest due 
to org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:305)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2529)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:299)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: SQLite JDBC: inconsistent internal 
state
at org.sqlite.core.CoreResultSet.checkCol(CoreResultSet.java:81)
at 
org.sqlite.jdbc3.JDBC3ResultSet.getColumnCount(JDBC3ResultSet.java:699)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:423)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:242)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:303)
... 13 common frames omitted
{code}


When I run this from 'ExecuteSQL' I get a flow file with one row.
{code:txt}
[ {
  "first" : "one",
  "second" : "two",
  "third" : "three"
} ]
{code}


was (Author: osunderdog):
Just to remove most of the database types from the equation, I created a table:
{code:sql}
create table footest (first text, second text, third text);
insert into footest (first,second,third) values ('one','two','three');
{code}

When I configure QueryDatabaseTable, I get a similar error:
{code:text}
2017-09-07 20:46:26,468 ERROR [Timer-Driven Process Thread-57] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=015e134c-5c1d-1689-deb5-be21b71befec]
Unable to execute SQL select query SELECT first,second,third FROM footest due 
to org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:305)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2529)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:299)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at 

[jira] [Commented] (NIFI-4238) Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select query due to org.apache.nifi.processor.exception.ProcessException: Error during database que

2017-09-07 Thread bruce lowther (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157635#comment-16157635
 ] 

bruce lowther commented on NIFI-4238:
-

Just to remove most of the database types from the equation, I created a table:
{code:sql}
create table footest (first text, second text, third text);
insert into footest (first,second,third) values ('one','two','three');
{code}

When I configure QueryDatabaseTable, I get a similar error:
{code:text}
2017-09-07 20:46:26,468 ERROR [Timer-Driven Process Thread-57] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=015e134c-5c1d-1689-deb5-be21b71befec]
Unable to execute SQL select query SELECT first,second,third FROM footest due 
to org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:305)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2529)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:299)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: SQLite JDBC: inconsistent internal 
state
at org.sqlite.core.CoreResultSet.checkCol(CoreResultSet.java:81)
at 
org.sqlite.jdbc3.JDBC3ResultSet.getColumnCount(JDBC3ResultSet.java:699)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:423)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:242)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:303)
... 13 common frames omitted
{code}


When I run this from 'ExecuteSQL' I get a flow file with one row.
{code:json}
[ {
  "first" : "one",
  "second" : "two",
  "third" : "three"
} ]
{code}

> Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select 
> query due to org.apache.nifi.processor.exception.ProcessException: Error 
> during database query or conversion of records to Avro
> 
>
> Key: NIFI-4238
> URL: https://issues.apache.org/jira/browse/NIFI-4238
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
> Environment: Centos
>Reporter: Ella
> Attachments: config_file_1.png, config_file_2.png, config_file_3.png, 
> diagram.png, Error.png, QueryDatabaseTableError.png
>
>
> Hi Guys,
> I should retrieve only the newly added records from the DB2 database to a file 
> by NiFi's CDC feature--QueryDatabaseTable processor; however, I have 
> encountered this error while executing my dataflow scenario. I have 
> attached a snapshot of the error as well as the dataflow; I would 
> really appreciate it if someone could help.
> Thanks a lot.
> Sincerely,
> Ella



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4238) Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select query due to org.apache.nifi.processor.exception.ProcessException: Error during database que

2017-09-07 Thread Matt Burgess (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157632#comment-16157632
 ] 

Matt Burgess commented on NIFI-4238:


It looks like the driver is returning 0 for the type of the _id_ column, which 
is not a valid Java SQL type: 
http://docs.oracle.com/javase/8/docs/api/constant-values.html#java.sql.Types.BIT

I believe this is because the query returns no rows (and the query above appears to 
return none). The SQLite driver doesn't return a type from getColumnType() for 
some reason: https://en.wikibooks.org/wiki/Java_JDBC_using_SQLite/Metadata

We could check for 0 and default the field type to String (shouldn't be too 
troublesome since there won't end up being any records in the flow file), but 
that seems a bit hacky and would only be to support a shortcoming in a 
particular driver.
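
For illustration only, a minimal sketch of that kind of fallback (not the actual JdbcCommon code; the class and method names below are assumptions): treat the unknown type 0 (java.sql.Types.NULL) that the SQLite driver reports for an empty result set as a plain string column.

{code:java}
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Types;

// Hypothetical sketch: map a JDBC column type to an Avro primitive name,
// defaulting the unknown type 0 (java.sql.Types.NULL) to "string".
public class SqlTypeFallbackSketch {

    static String toAvroType(int sqlType) {
        switch (sqlType) {
            case Types.INTEGER:
            case Types.BIGINT:
                return "long";
            case Types.FLOAT:
            case Types.REAL:
            case Types.DOUBLE:
                return "double";
            case Types.NULL:   // 0 -- the driver could not determine the type
            case Types.VARCHAR:
            case Types.CHAR:
            default:
                return "string";
        }
    }

    // Example: print the inferred Avro type of each column in a result set.
    static void describe(ResultSetMetaData meta) throws SQLException {
        for (int i = 1; i <= meta.getColumnCount(); i++) {
            System.out.println(meta.getColumnName(i) + " -> " + toAvroType(meta.getColumnType(i)));
        }
    }
}
{code}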

> Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select 
> query due to org.apache.nifi.processor.exception.ProcessException: Error 
> during database query or conversion of records to Avro
> 
>
> Key: NIFI-4238
> URL: https://issues.apache.org/jira/browse/NIFI-4238
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
> Environment: Centos
>Reporter: Ella
> Attachments: config_file_1.png, config_file_2.png, config_file_3.png, 
> diagram.png, Error.png, QueryDatabaseTableError.png
>
>
> Hi Guys,
> I should retrieve only the newly added records from the DB2 database to a file 
> by NiFi's CDC feature--QueryDatabaseTable processor; however, I have 
> encountered this error while executing my dataflow scenario. I have 
> attached a snapshot of the error as well as the dataflow; I would 
> really appreciate it if someone could help.
> Thanks a lot.
> Sincerely,
> Ella



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4238) Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select query due to org.apache.nifi.processor.exception.ProcessException: Error during database que

2017-09-07 Thread bruce lowther (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157603#comment-16157603
 ] 

bruce lowther commented on NIFI-4238:
-

Matt, thank you for responding.  Here is the table creation I'm using for the 
sqlite table:

{code:sql}
create table mamtx_watermark (
   id integer primary key autoincrement,
   dt datetime default (datetime('now')),
   site text,
   app text,
   state text
   );
{code}

I tried the SQL presented in the error logs against the sqlite3 database and it 
works as expected:
{code:sql}
sqlite> SELECT id, datetime(dt) as 'dt', site, app, state FROM mamtx_watermark 
WHERE id > 3;
4|2017-09-07 19:27:50|FOO|TST|INCREMENTAL
5|2017-09-07 19:25:43|BAR|TST|NEW
6|2017-09-07 19:25:45|BAZ|TST|NEW
{code}

I tried many variations of the select by changing the contents of the 'Columns 
to Return' parameter.
For example, I even tried a select with constant data by setting Columns to 
Return to:
'id','dt','site','app','state'
and I get the same error.

{code:sql}
sqlite> select 'id','dt','site','app','state' from mamtx_watermark where id > 3;
id|dt|site|app|state
id|dt|site|app|state
id|dt|site|app|state
{code}

{code:error}
Unable to execute SQL select query SELECT 'id','dt','site','app','state' FROM 
mamtx_watermark WHERE id > 6 due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:305)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2529)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:299)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1120)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: createSchema: Unknown SQL type 0 
/ NULL (table: mamtx_watermark, column: _id_) cannot be converted to Avro type
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:564)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:242)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$13(QueryDatabaseTable.java:303)
... 13 common frames omitted
{code}

> Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select 
> query due to org.apache.nifi.processor.exception.ProcessException: Error 
> during database query or conversion of records to Avro
> 
>
> Key: NIFI-4238
> URL: https://issues.apache.org/jira/browse/NIFI-4238
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
> Environment: Centos
>Reporter: Ella
> Attachments: config_file_1.png, config_file_2.png, config_file_3.png, 
> diagram.png, Error.png, QueryDatabaseTableError.png
>
>
> Hi Guys,
> I should retrieve only the newly added records from the DB2 database to a file 
> by NiFi's CDC feature--QueryDatabaseTable processor; however, I have 
> encountered this error while executing my dataflow scenario. I have 
> attached a snapshot of the error as well as the dataflow; I would 
> really appreciate it if someone could help.
> Thanks a lot.
> Sincerely,
> Ella



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4238) Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select query due to org.apache.nifi.processor.exception.ProcessException: Error during database que

2017-09-07 Thread Matt Burgess (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157578#comment-16157578
 ] 

Matt Burgess commented on NIFI-4238:


What are the data types for the columns in that table? Also, does that same SQL 
command (from the error log) work from the command-line?

> Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select 
> query due to org.apache.nifi.processor.exception.ProcessException: Error 
> during database query or conversion of records to Avro
> 
>
> Key: NIFI-4238
> URL: https://issues.apache.org/jira/browse/NIFI-4238
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
> Environment: Centos
>Reporter: Ella
> Attachments: config_file_1.png, config_file_2.png, config_file_3.png, 
> diagram.png, Error.png, QueryDatabaseTableError.png
>
>
> Hi Guys,
> I should retrieve only the newly added records from the DB2 database to a file 
> by NiFi's CDC feature--QueryDatabaseTable processor; however, I have 
> encountered this error while executing my dataflow scenario. I have 
> attached a snapshot of the error as well as the dataflow; I would 
> really appreciate it if someone could help.
> Thanks a lot.
> Sincerely,
> Ella



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4357) Improve template XML loading globally

2017-09-07 Thread Andy LoPresto (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy LoPresto updated NIFI-4357:

Status: Patch Available  (was: In Progress)

> Improve template XML loading globally
> -
>
> Key: NIFI-4357
> URL: https://issues.apache.org/jira/browse/NIFI-4357
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
> Fix For: 1.4.0
>
>
> We should improve the template loading code (uses JAXBContext). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4238) Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select query due to org.apache.nifi.processor.exception.ProcessException: Error during database query

2017-09-07 Thread bruce lowther (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bruce lowther updated NIFI-4238:

Attachment: QueryDatabaseTableError.png

SQLITE error using QueryDatabaseTable operator

> Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select 
> query due to org.apache.nifi.processor.exception.ProcessException: Error 
> during database query or conversion of records to Avro
> 
>
> Key: NIFI-4238
> URL: https://issues.apache.org/jira/browse/NIFI-4238
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
> Environment: Centos
>Reporter: Ella
> Attachments: config_file_1.png, config_file_2.png, config_file_3.png, 
> diagram.png, Error.png, QueryDatabaseTableError.png
>
>
> Hi Guys,
> I should retrieve only the newly added records from the DB2 database to a file 
> by NiFi's CDC feature--QueryDatabaseTable processor; however, I have 
> encountered this error while executing my dataflow scenario. I have 
> attached a snapshot of the error as well as the dataflow; I would 
> really appreciate it if someone could help.
> Thanks a lot.
> Sincerely,
> Ella



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4238) Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select query due to org.apache.nifi.processor.exception.ProcessException: Error during database que

2017-09-07 Thread bruce lowther (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157454#comment-16157454
 ] 

bruce lowther commented on NIFI-4238:
-

I am also experiencing this issue with SQLite.


> Error in QueryDatabaseTable (NiFi CDC support): Unable to execute SQL select 
> query due to org.apache.nifi.processor.exception.ProcessException: Error 
> during database query or conversion of records to Avro
> 
>
> Key: NIFI-4238
> URL: https://issues.apache.org/jira/browse/NIFI-4238
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0
> Environment: Centos
>Reporter: Ella
> Attachments: config_file_1.png, config_file_2.png, config_file_3.png, 
> diagram.png, Error.png
>
>
> Hi Guys,
> I should retrieve only the newly added records from the DB2 database to a file 
> by NiFi's CDC feature--QueryDatabaseTable processor; however, I have 
> encountered this error while executing my dataflow scenario. I have 
> attached a snapshot of the error as well as the dataflow; I would 
> really appreciate it if someone could help.
> Thanks a lot.
> Sincerely,
> Ella



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4357) Improve template XML loading globally

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157455#comment-16157455
 ] 

ASF GitHub Bot commented on NIFI-4357:
--

GitHub user alopresto opened a pull request:

https://github.com/apache/nifi/pull/2134

NIFI-4357 Global improvement of XML unmarshalling

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alopresto/nifi NIFI-4357

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2134.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2134


commit 0fa6ca17629094845bc9e374f442e3617205f1fd
Author: Andy LoPresto 
Date:   2017-09-06T23:45:13Z

NIFI-4353 Added XmlUtils class.
Added unit test.
Added XXE test resource.

commit 91ff58d038d3afe6a6c1aa13226a2c3050612938
Author: Andy LoPresto 
Date:   2017-09-07T18:57:33Z

NIFI-4357 Refactored JAXB unmarshalling globally to prevent XXE attacks.
Refactored duplicated/legacy code.

commit f2b396eb629f3adadd56eaff8ce9ee245426
Author: Andy LoPresto 
Date:   2017-09-07T19:03:35Z

NIFI-4357 Cleaned up commented code.
Switched from FileInputStream back to StreamSource in AuthorizerFactoryBean.
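
For context, a minimal sketch of the kind of hardened unmarshalling an XmlUtils-style helper typically provides; the class and method shown here are assumptions for illustration, not the code in this PR. The idea is to disable DTDs and external entity resolution on the StAX factory before handing the reader to JAXB, which closes the XXE vector.

{code:java}
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;
import javax.xml.transform.stream.StreamSource;
import java.io.InputStream;

// Hypothetical helper illustrating XXE-safe JAXB unmarshalling.
public class SafeXmlSketch {

    public static <T> T unmarshal(InputStream in, Class<T> type) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newFactory();
        // Reject DTDs and external entity resolution to prevent XXE.
        factory.setProperty(XMLInputFactory.SUPPORT_DTD, false);
        factory.setProperty(XMLInputFactory.IS_SUPPORTING_EXTERNAL_ENTITIES, false);

        XMLStreamReader reader = factory.createXMLStreamReader(new StreamSource(in));
        Unmarshaller unmarshaller = JAXBContext.newInstance(type).createUnmarshaller();
        return unmarshaller.unmarshal(reader, type).getValue();
    }
}
{code}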




> Improve template XML loading globally
> -
>
> Key: NIFI-4357
> URL: https://issues.apache.org/jira/browse/NIFI-4357
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Andy LoPresto
>Assignee: Andy LoPresto
> Fix For: 1.4.0
>
>
> We should improve the template loading code (uses JAXBContext). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2134: NIFI-4357 Global improvement of XML unmarshalling

2017-09-07 Thread alopresto
GitHub user alopresto opened a pull request:

https://github.com/apache/nifi/pull/2134

NIFI-4357 Global improvement of XML unmarshalling

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alopresto/nifi NIFI-4357

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2134.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2134


commit 0fa6ca17629094845bc9e374f442e3617205f1fd
Author: Andy LoPresto 
Date:   2017-09-06T23:45:13Z

NIFI-4353 Added XmlUtils class.
Added unit test.
Added XXE test resource.

commit 91ff58d038d3afe6a6c1aa13226a2c3050612938
Author: Andy LoPresto 
Date:   2017-09-07T18:57:33Z

NIFI-4357 Refactored JAXB unmarshalling globally to prevent XXE attacks.
Refactored duplicated/legacy code.

commit f2b396eb629f3adadd56eaff8ce9ee245426
Author: Andy LoPresto 
Date:   2017-09-07T19:03:35Z

NIFI-4357 Cleaned up commented code.
Switched from FileInputStream back to StreamSource in AuthorizerFactoryBean.




---


[jira] [Updated] (NIFI-4257) Allow a custom WHERE clause in AbstractDatabaseFetchProcessor

2017-09-07 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4257:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Allow a custom WHERE clause in AbstractDatabaseFetchProcessor
> -
>
> Key: NIFI-4257
> URL: https://issues.apache.org/jira/browse/NIFI-4257
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Fix For: 1.4.0
>
>
> It could be useful to allow a user to set a custom WHERE clause in 
> AbstractDatabaseFetchProcessor in case not all of the data in the table is 
> required.
> If the WHERE clause is changed after the processor has already been 
> running, the user will probably have to set the initial maximum values to 
> ensure the expected behaviour.
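
For illustration only (this is not the merged code), a sketch of how an optional custom WHERE clause might be combined with the usual max-value predicate when the query is generated; all names below are hypothetical:

{code:java}
// Hypothetical sketch: append an optional custom WHERE clause to the
// incremental-fetch predicate built from the max-value columns.
class FetchQuerySketch {
    static String buildQuery(String tableName, String columns,
                             String maxValuePredicate, String customWhere) {
        StringBuilder query = new StringBuilder("SELECT ")
                .append(columns).append(" FROM ").append(tableName);

        StringBuilder where = new StringBuilder();
        if (maxValuePredicate != null && !maxValuePredicate.isEmpty()) {
            where.append(maxValuePredicate);
        }
        if (customWhere != null && !customWhere.isEmpty()) {
            if (where.length() > 0) {
                where.append(" AND ");
            }
            where.append("(").append(customWhere).append(")");
        }
        if (where.length() > 0) {
            query.append(" WHERE ").append(where);
        }
        return query.toString();
    }

    public static void main(String[] args) {
        // Prints: SELECT id, state FROM footest WHERE id > 6 AND (state = 'NEW')
        System.out.println(buildQuery("footest", "id, state", "id > 6", "state = 'NEW'"));
    }
}
{code}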



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4257) Allow a custom WHERE clause in AbstractDatabaseFetchProcessor

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157433#comment-16157433
 ] 

ASF GitHub Bot commented on NIFI-4257:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2050


> Allow a custom WHERE clause in AbstractDatabaseFetchProcessor
> -
>
> Key: NIFI-4257
> URL: https://issues.apache.org/jira/browse/NIFI-4257
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Fix For: 1.4.0
>
>
> It could be useful to allow a user to set a custom WHERE clause in 
> AbstractDatabaseFetchProcessor in case not all of the data in the table is 
> required.
> If the WHERE clause is changed after the processor has already been 
> running, the user will probably have to set the initial maximum values to 
> ensure the expected behaviour.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-4257) Allow a custom WHERE clause in AbstractDatabaseFetchProcessor

2017-09-07 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-4257:
---
Fix Version/s: 1.4.0

> Allow a custom WHERE clause in AbstractDatabaseFetchProcessor
> -
>
> Key: NIFI-4257
> URL: https://issues.apache.org/jira/browse/NIFI-4257
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Fix For: 1.4.0
>
>
> It could be useful to allow a user to set a custom WHERE clause in 
> AbstractDatabaseFetchProcessor in case not all of the data in the table is 
> required.
> If the WHERE clause is changed after the processor has already been 
> running, the user will probably have to set the initial maximum values to 
> ensure the expected behaviour.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2050: NIFI-4257 - add custom WHERE clause in database fet...

2017-09-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2050


---


[jira] [Commented] (NIFI-4257) Allow a custom WHERE clause in AbstractDatabaseFetchProcessor

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157432#comment-16157432
 ] 

ASF subversion and git services commented on NIFI-4257:
---

Commit c10ff574c4602fe05f5d1dae5eb0b1bd24026c02 in nifi's branch 
refs/heads/master from [~pvillard]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=c10ff57 ]

NIFI-4257 - add custom WHERE clause in database fetch processors

Signed-off-by: Matthew Burgess 

This closes #2050


> Allow a custom WHERE clause in AbstractDatabaseFetchProcessor
> -
>
> Key: NIFI-4257
> URL: https://issues.apache.org/jira/browse/NIFI-4257
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Fix For: 1.4.0
>
>
> It could be useful to allow a user to set a custom WHERE clause in 
> AbstractDatabaseFetchProcessor in case not all of the data in the table is 
> required.
> If the WHERE clause is changed after the processor has already been 
> running, the user will probably have to set the initial maximum values to 
> ensure the expected behaviour.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4257) Allow a custom WHERE clause in AbstractDatabaseFetchProcessor

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157431#comment-16157431
 ] 

ASF GitHub Bot commented on NIFI-4257:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2050
  
+1 LGTM, ran build and unit tests (and contrib-check), also tested both 
GenerateTableFetch and QueryDatabaseTable with MySQL and Oracle DBs and various 
WHERE clauses. Great work! Merging to master


> Allow a custom WHERE clause in AbstractDatabaseFetchProcessor
> -
>
> Key: NIFI-4257
> URL: https://issues.apache.org/jira/browse/NIFI-4257
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Fix For: 1.4.0
>
>
> It could be useful to allow a user to set a custom WHERE clause in 
> AbstractDatabaseFetchProcessor in case not all of the data in the table is 
> required.
> If the WHERE clause is changed after the processor has already been 
> running, the user will probably have to set the initial maximum values to 
> ensure the expected behaviour.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2050: NIFI-4257 - add custom WHERE clause in database fetch proc...

2017-09-07 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2050
  
+1 LGTM, ran build and unit tests (and contrib-check), also tested both 
GenerateTableFetch and QueryDatabaseTable with MySQL and Oracle DBs and various 
WHERE clauses. Great work! Merging to master


---


[jira] [Commented] (NIFI-4345) Add a controller service and a lookup service for MongoDB

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157423#comment-16157423
 ] 

ASF GitHub Bot commented on NIFI-4345:
--

Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2123
  
@mattyb149 Ok should be good to go now.


> Add a controller service and a lookup service for MongoDB
> -
>
> Key: NIFI-4345
> URL: https://issues.apache.org/jira/browse/NIFI-4345
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>
> - Create a Controller Service that wraps the functionality of the Mongo 
> driver.
> - Create a lookup service that can return elements based on a query.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2123: NIFI-4345 Added a MongoDB controller service and a lookup ...

2017-09-07 Thread MikeThomsen
Github user MikeThomsen commented on the issue:

https://github.com/apache/nifi/pull/2123
  
@mattyb149 Ok should be good to go now.


---


[jira] [Assigned] (NIFI-4362) Prometheus Reporting Task

2017-09-07 Thread matt price (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matt price reassigned NIFI-4362:


Assignee: matt price

> Prometheus Reporting Task
> -
>
> Key: NIFI-4362
> URL: https://issues.apache.org/jira/browse/NIFI-4362
> Project: Apache NiFi
>  Issue Type: Task
>  Components: Extensions
>Affects Versions: 1.3.0
>Reporter: matt price
>Assignee: matt price
>Priority: Trivial
>  Labels: features, newbie
>
> Right now Datadog is one of the few external monitoring systems that is 
> supported by NiFi via a reporting task.  We are building a Prometheus 
> reporting task that will report metrics similar to those of Datadog/processor status 
> history and wanted to contribute this back to the community.
> This is my first contribution to NiFi, so please correct me if I'm doing 
> something incorrectly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (NIFI-4348) isExpressionLanguagePresent throws NPE when attribute is null

2017-09-07 Thread Kay-Uwe Moosheimer (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157296#comment-16157296
 ] 

Kay-Uwe Moosheimer edited comment on NIFI-4348 at 9/7/17 5:38 PM:
--

java.lang.NullPointerException
at 
org.apache.nifi.attribute.expression.language.StandardPropertyValue.isExpressionLanguagePresent(*StandardPropertyValue.java:204*)

Possible fix:
return (isSet() && preparedQuery.isExpressionLanguagePresent());
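
For illustration, a null-safe version along those lines might look like the sketch below (a simplified, self-contained mock-up; the real field and class names in StandardPropertyValue may differ):

{code:java}
// Simplified sketch of the proposed guard: return false instead of throwing an
// NPE when the property was never set (i.e. the prepared query is null).
class PropertyValueSketch {
    private final PreparedQuerySketch preparedQuery;

    PropertyValueSketch(PreparedQuerySketch preparedQuery) {
        this.preparedQuery = preparedQuery;
    }

    boolean isSet() {
        return preparedQuery != null;
    }

    boolean isExpressionLanguagePresent() {
        return isSet() && preparedQuery.isExpressionLanguagePresent();
    }
}

// Stand-in for the prepared Expression Language query.
class PreparedQuerySketch {
    boolean isExpressionLanguagePresent() {
        return true;
    }
}
{code}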


was (Author: moosheimer):
java.lang.NullPointerException
at 
org.apache.nifi.attribute.expression.language.StandardPropertyValue.isExpressionLanguagePresent(*StandardPropertyValue.java:204*)

Prossible fix:
return (isSet() && preparedQuery.isExpressionLanguagePresent());

> isExpressionLanguagePresent throws NPE when attribute is null
> -
>
> Key: NIFI-4348
> URL: https://issues.apache.org/jira/browse/NIFI-4348
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Kay-Uwe Moosheimer
>Priority: Trivial
>
> The following code throws an NPE:
> PropertyValue property = context.getProperty(SOME_PROPERTY);
> if (property.isExpressionLanguagePresent()) {
> when the property is not set (NULL).
> So I have to write
> PropertyValue property = context.getProperty(SOME_PROPERTY);
> if (property.isSet() && property.isExpressionLanguagePresent()) {
> It would be great if the method isExpressionLanguagePresent() checked for NULL 
> and then returned false.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (NIFI-4348) isExpressionLanguagePresent throws NPE when attribute is null

2017-09-07 Thread Kay-Uwe Moosheimer (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157296#comment-16157296
 ] 

Kay-Uwe Moosheimer edited comment on NIFI-4348 at 9/7/17 5:38 PM:
--

java.lang.NullPointerException
at 
org.apache.nifi.attribute.expression.language.StandardPropertyValue.isExpressionLanguagePresent(*StandardPropertyValue.java:204*)

Prossible fix:
return (isSet() && preparedQuery.isExpressionLanguagePresent());


was (Author: moosheimer):
java.lang.NullPointerException
at 
org.apache.nifi.attribute.expression.language.StandardPropertyValue.isExpressionLanguagePresent(*StandardPropertyValue.java:204*)


> isExpressionLanguagePresent throws NPE when attribute is null
> -
>
> Key: NIFI-4348
> URL: https://issues.apache.org/jira/browse/NIFI-4348
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Kay-Uwe Moosheimer
>Priority: Trivial
>
> The following code throws an NPE:
> PropertyValue property = context.getProperty(SOME_PROPERTY);
> if (property.isExpressionLanguagePresent()) {
> when the property is not set (NULL).
> So I have to write
> PropertyValue property = context.getProperty(SOME_PROPERTY);
> if (property.isSet() && property.isExpressionLanguagePresent()) {
> It would be great if the method isExpressionLanguagePresent() checked for NULL 
> and then returned false.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4348) isExpressionLanguagePresent throws NPE when attribute is null

2017-09-07 Thread Kay-Uwe Moosheimer (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157296#comment-16157296
 ] 

Kay-Uwe Moosheimer commented on NIFI-4348:
--

java.lang.NullPointerException
at 
org.apache.nifi.attribute.expression.language.StandardPropertyValue.isExpressionLanguagePresent(*StandardPropertyValue.java:204*)


> isExpressionLanguagePresent throws NPE when attribute is null
> -
>
> Key: NIFI-4348
> URL: https://issues.apache.org/jira/browse/NIFI-4348
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.3.0
>Reporter: Kay-Uwe Moosheimer
>Priority: Trivial
>
> The following code throws an NPE:
> PropertyValue property = context.getProperty(SOME_PROPERTY);
> if (property.isExpressionLanguagePresent()) {
> when the property is not set (NULL).
> So I have to write
> PropertyValue property = context.getProperty(SOME_PROPERTY);
> if (property.isSet() && property.isExpressionLanguagePresent()) {
> It would be great if the method isExpressionLanguagePresent() checked for NULL 
> and then returned false.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4257) Allow a custom WHERE clause in AbstractDatabaseFetchProcessor

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157275#comment-16157275
 ] 

ASF GitHub Bot commented on NIFI-4257:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2050
  
Done, thanks @mattyb149 !


> Allow a custom WHERE clause in AbstractDatabaseFetchProcessor
> -
>
> Key: NIFI-4257
> URL: https://issues.apache.org/jira/browse/NIFI-4257
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>
> It could be useful to allow a user to set a custom WHERE clause in 
> AbstractDatabaseFetchProcessor in case not all of the data in the table is 
> required.
> If the WHERE clause is changed after the processor has already been 
> running, the user will probably have to set the initial maximum values to 
> ensure the expected behaviour.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2050: NIFI-4257 - add custom WHERE clause in database fetch proc...

2017-09-07 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/2050
  
Done, thanks @mattyb149 !


---


[jira] [Resolved] (NIFI-3218) MockProcessSession should prevent transferring new FlowFile to input queue

2017-09-07 Thread Brandon DeVries (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon DeVries resolved NIFI-3218.
---
Resolution: Fixed

> MockProcessSession should prevent transferring new FlowFile to input queue
> --
>
> Key: NIFI-3218
> URL: https://issues.apache.org/jira/browse/NIFI-3218
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.0, 0.8.0
>Reporter: Joe Skora
>Assignee: Michael Hogue
> Fix For: 1.4.0
>
>
> StandardProcessSession.transfer() throws an exception if called with a newly 
> created FlowFile and no relationship.  MockProcessSession should behave 
> similarly.
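
For illustration, a simplified sketch of the kind of guard the mock session could apply in transfer(); the names and types below are stand-ins, not the actual MockProcessSession API:

{code:java}
// Hypothetical sketch of the guard: a FlowFile created in this session must be
// transferred to an explicit relationship, never back to the input queue.
class MockSessionSketch {
    private final java.util.Set<String> createdIds = new java.util.HashSet<>();

    String create() {
        String id = java.util.UUID.randomUUID().toString();
        createdIds.add(id);
        return id; // stands in for a newly created FlowFile
    }

    void transfer(String flowFileId) {
        if (createdIds.contains(flowFileId)) {
            throw new IllegalArgumentException(
                "Cannot transfer a FlowFile created in this session back to the input queue; "
                + "specify a relationship instead.");
        }
        // otherwise: requeue the incoming FlowFile (omitted)
    }
}
{code}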



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3218) MockProcessSession should prevent transferring new FlowFile to input queue

2017-09-07 Thread Michael Hogue (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157274#comment-16157274
 ] 

Michael Hogue commented on NIFI-3218:
-

Set fix version to 1.4.0.

> MockProcessSession should prevent transferring new FlowFile to input queue
> --
>
> Key: NIFI-3218
> URL: https://issues.apache.org/jira/browse/NIFI-3218
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.0, 0.8.0
>Reporter: Joe Skora
>Assignee: Michael Hogue
> Fix For: 1.4.0
>
>
> StandardProcessSession.transfer() throws an exception if called with a newly 
> created FlowFile and no relationship.  MockProcessSession should behave 
> similarly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (NIFI-3218) MockProcessSession should prevent transferring new FlowFile to input queue

2017-09-07 Thread Michael Hogue (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Hogue updated NIFI-3218:

Fix Version/s: 1.4.0

> MockProcessSession should prevent transferring new FlowFile to input queue
> --
>
> Key: NIFI-3218
> URL: https://issues.apache.org/jira/browse/NIFI-3218
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.0, 0.8.0
>Reporter: Joe Skora
>Assignee: Michael Hogue
> Fix For: 1.4.0
>
>
> StandardProcessSession.transfer() throws an exception if called with a newly 
> created FlowFile and no relationship.  MockProcessSession should behave 
> similarly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3218) MockProcessSession should prevent transferring new FlowFile to input queue

2017-09-07 Thread Brandon DeVries (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157266#comment-16157266
 ] 

Brandon DeVries commented on NIFI-3218:
---

[~m-hogue] can you set a "fix version"?

> MockProcessSession should prevent transferring new FlowFile to input queue
> --
>
> Key: NIFI-3218
> URL: https://issues.apache.org/jira/browse/NIFI-3218
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.0, 0.8.0
>Reporter: Joe Skora
>Assignee: Michael Hogue
>
> StandardProcessSession.transfer() throws an exception if called with a newly 
> created FlowFile and no relationship.  MockProcessSession should behave 
> similarly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3484) GenerateTableFetch Should Allow for Right Boundary

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157256#comment-16157256
 ] 

ASF GitHub Bot commented on NIFI-3484:
--

Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1513
  
This is now merged with #2091. Can you close this one @patricker ? Thanks!


> GenerateTableFetch Should Allow for Right Boundary
> --
>
> Key: NIFI-3484
> URL: https://issues.apache.org/jira/browse/NIFI-3484
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
> Fix For: 1.4.0
>
>
> When using GenerateTableFetch it places no right hand boundary on pages of 
> data.  This can lead to issues when the statement says to get the next 1000 
> records greater than a specific key, but records were added to the table 
> between the time the processor executed and when the SQL is being executed. 
> As a result it pulls in records that did not exist when the processor was 
> run.  On the next execution of the processor these records will be pulled in 
> a second time.
> Example:
> Partition Size = 1000
> First run (no state): Count(*)=4700 and MAX(ID)=4700.
> 5 FlowFiles are generated, the last one will say to fetch 1000, not 700. (But 
> I don't think this is really a bug, just an observation).
> 5 Flow Files are now in queue to be executed by ExecuteSQL.  Before the 5th 
> file can execute 400 new rows are added to the table.  When the final SQL 
> statement is executed 300 extra records, with higher ID values, will also be 
> pulled into NiFi.
> Second run (state: ID=4700).  Count(*) ID>4700 = 400 and MAX(ID)=5100.
> 1 Flow File is generated, but includes 300 records already pulled into NiFi.
> The solution is to have an optional property that will let users use the new 
> MAX(ID) as a right boundary when generating queries.
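
To make the window concrete, here is the shape of the predicates being
discussed. This is illustrative SQL against a hypothetical my_table / ID
column, not the exact statements GenerateTableFetch emits:

{code}
public class RightBoundaryExample {
    // Open-ended last page from the example above: any rows inserted after the
    // processor sampled MAX(ID)=4700 (IDs 4701..5000) are swept in and will be
    // fetched again on the next run.
    static final String WITHOUT_RIGHT_BOUNDARY =
            "SELECT * FROM my_table WHERE ID > 4000 ORDER BY ID LIMIT 1000";

    // With the MAX(ID) captured at generation time used as a right boundary,
    // the last page only covers rows that existed when the processor ran.
    static final String WITH_RIGHT_BOUNDARY =
            "SELECT * FROM my_table WHERE ID > 4000 AND ID <= 4700 ORDER BY ID LIMIT 1000";
}
{code}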



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #1513: NIFI-3484 GenerateTableFetch Should Allow for Right Bounda...

2017-09-07 Thread pvillard31
Github user pvillard31 commented on the issue:

https://github.com/apache/nifi/pull/1513
  
This is now merged with #2091. Can you close this one @patricker ? Thanks!


---


[jira] [Created] (NIFI-4364) InfluxDB ControllerService, PutInfluxDB Processor, and ReportingTask

2017-09-07 Thread Richard St. John (JIRA)
Richard St. John created NIFI-4364:
--

 Summary: InfluxDB ControllerService, PutInfluxDB Processor, and 
ReportingTask
 Key: NIFI-4364
 URL: https://issues.apache.org/jira/browse/NIFI-4364
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Richard St. John


My team has been working on storing metric data in InfluxDB.  As such, we created 
an InfluxDB controller service, a PutInfluxDB processor, and a ReportingTask that 
sends data to InfluxDB.  Is this something that others could benefit from?  If so, 
we could contribute these additions to the NiFi codebase.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4363) Parameterize heap allocation in NiFi Toolkit scripts

2017-09-07 Thread Jeff Storck (JIRA)
Jeff Storck created NIFI-4363:
-

 Summary: Parameterize heap allocation in NiFi Toolkit scripts
 Key: NIFI-4363
 URL: https://issues.apache.org/jira/browse/NIFI-4363
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Tools and Build
Affects Versions: 1.3.0
Reporter: Jeff Storck
Assignee: Jeff Storck


Replace the hardcoded Java heap allocation in the Toolkit scripts with 
parameterized values.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFIREG-17) Clean up NiFi Registry REST API generated docs.

2017-09-07 Thread Scott Aslan (JIRA)
Scott Aslan created NIFIREG-17:
--

 Summary: Clean up NiFi Registry REST API generated docs.
 Key: NIFIREG-17
 URL: https://issues.apache.org/jira/browse/NIFIREG-17
 Project: NiFi Registry
  Issue Type: Bug
Reporter: Scott Aslan
Priority: Minor


The documentation generated by the nifi-registry-web-api build is out of sync 
with the swagger.json API definition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (NIFI-4362) Prometheus Reporting Task

2017-09-07 Thread matt price (JIRA)
matt price created NIFI-4362:


 Summary: Prometheus Reporting Task
 Key: NIFI-4362
 URL: https://issues.apache.org/jira/browse/NIFI-4362
 Project: Apache NiFi
  Issue Type: Task
  Components: Extensions
Affects Versions: 1.3.0
Reporter: matt price
Priority: Trivial


Right now Datadog is one of the few external monitoring systems supported by 
NiFi via a reporting task.  We are building a Prometheus reporting task that 
will report metrics similar to the Datadog task and the processor status 
history, and we want to contribute it back to the community.

This is my first contribution to NiFi, so please correct me if I'm doing 
something incorrectly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4257) Allow a custom WHERE clause in AbstractDatabaseFetchProcessor

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157084#comment-16157084
 ] 

ASF GitHub Bot commented on NIFI-4257:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2050
  
Do you mind rebasing this against the latest master? The code LGTM, so I will 
run it once it builds cleanly and merge if all is well. Thanks in advance!


> Allow a custom WHERE clause in AbstractDatabaseFetchProcessor
> -
>
> Key: NIFI-4257
> URL: https://issues.apache.org/jira/browse/NIFI-4257
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Reporter: Pierre Villard
>Assignee: Pierre Villard
>
> It could be useful to allow a user to set a custom WHERE clause in 
> AbstractDatabaseFetchProcessor in case not all of the data in the table is 
> required.
> If the WHERE clause is changed after the processor has already been running, 
> the user will probably have to set the initial maximum values to ensure the 
> expected behaviour.
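
A rough sketch of how an optional user-supplied clause could be folded into
the query the fetch processors already build. The method and class names here
are assumptions for illustration, not the names used in the actual PR:

{code}
public class CustomWhereClauseExample {
    // ANDs an optional user-supplied predicate onto the query the fetch
    // processor generated, leaving the existing max-value predicates intact.
    static String applyCustomWhere(final String baseQuery, final String customWhereClause) {
        if (customWhereClause == null || customWhereClause.trim().isEmpty()) {
            return baseQuery;
        }
        final String joiner = baseQuery.toLowerCase().contains(" where ") ? " AND " : " WHERE ";
        return baseQuery + joiner + "(" + customWhereClause + ")";
    }
}
{code}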



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2050: NIFI-4257 - add custom WHERE clause in database fetch proc...

2017-09-07 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2050
  
Do you mind rebasing this against the latest master? The code LGTM, so I will 
run it once it builds cleanly and merge if all is well. Thanks in advance!


---


[jira] [Updated] (NIFI-3484) GenerateTableFetch Should Allow for Right Boundary

2017-09-07 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-3484:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> GenerateTableFetch Should Allow for Right Boundary
> --
>
> Key: NIFI-3484
> URL: https://issues.apache.org/jira/browse/NIFI-3484
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
> Fix For: 1.4.0
>
>
> When using GenerateTableFetch it places no right hand boundary on pages of 
> data.  This can lead to issues when the statement says to get the next 1000 
> records greater than a specific key, but records were added to the table 
> between the time the processor executed and when the SQL is being executed. 
> As a result it pulls in records that did not exist when the processor was 
> run.  On the next execution of the processor these records will be pulled in 
> a second time.
> Example:
> Partition Size = 1000
> First run (no state): Count(*)=4700 and MAX(ID)=4700.
> 5 FlowFiles are generated, the last one will say to fetch 1000, not 700. (But 
> I don't think this is really a bug, just an observation).
> 5 Flow Files are now in queue to be executed by ExecuteSQL.  Before the 5th 
> file can execute 400 new rows are added to the table.  When the final SQL 
> statement is executed 300 extra records, with higher ID values, will also be 
> pulled into NiFi.
> Second run (state: ID=4700).  Count(*) ID>4700 = 400 and MAX(ID)=5100.
> 1 Flow File is generated, but includes 300 records already pulled into NiFi.
> The solution is to have an optional property that will let users use the new 
> MAX(ID) as a right boundary when generating queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3484) GenerateTableFetch Should Allow for Right Boundary

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157062#comment-16157062
 ] 

ASF GitHub Bot commented on NIFI-3484:
--

Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2091


> GenerateTableFetch Should Allow for Right Boundary
> --
>
> Key: NIFI-3484
> URL: https://issues.apache.org/jira/browse/NIFI-3484
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
> Fix For: 1.4.0
>
>
> When using GenerateTableFetch it places no right hand boundary on pages of 
> data.  This can lead to issues when the statement says to get the next 1000 
> records greater than a specific key, but records were added to the table 
> between the time the processor executed and when the SQL is being executed. 
> As a result it pulls in records that did not exist when the processor was 
> run.  On the next execution of the processor these records will be pulled in 
> a second time.
> Example:
> Partition Size = 1000
> First run (no state): Count(*)=4700 and MAX(ID)=4700.
> 5 FlowFiles are generated, the last one will say to fetch 1000, not 700. (But 
> I don't think this is really a bug, just an observation).
> 5 Flow Files are now in queue to be executed by ExecuteSQL.  Before the 5th 
> file can execute 400 new rows are added to the table.  When the final SQL 
> statement is executed 300 extra records, with higher ID values, will also be 
> pulled into NiFi.
> Second run (state: ID=4700).  Count(*) ID>4700 = 400 and MAX(ID)=5100.
> 1 Flow File is generated, but includes 300 records already pulled into NiFi.
> The solution is to have an optional property that will let users use the new 
> MAX(ID) as a right boundary when generating queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3484) GenerateTableFetch Should Allow for Right Boundary

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157060#comment-16157060
 ] 

ASF subversion and git services commented on NIFI-3484:
---

Commit ae30c7f35013e1faf26c6bd3af122362fa4b361e in nifi's branch 
refs/heads/master from patricker
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=ae30c7f ]

NIFI-3484: GenerateTableFetch Should Allow for Right Boundary

fix checkstyle issue, and added unit test showing data duplication issue, 
removed property

Signed-off-by: Matthew Burgess 

This closes #2091


> GenerateTableFetch Should Allow for Right Boundary
> --
>
> Key: NIFI-3484
> URL: https://issues.apache.org/jira/browse/NIFI-3484
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
> Fix For: 1.4.0
>
>
> When using GenerateTableFetch it places no right hand boundary on pages of 
> data.  This can lead to issues when the statement says to get the next 1000 
> records greater than a specific key, but records were added to the table 
> between the time the processor executed and when the SQL is being executed. 
> As a result it pulls in records that did not exist when the processor was 
> run.  On the next execution of the processor these records will be pulled in 
> a second time.
> Example:
> Partition Size = 1000
> First run (no state): Count(*)=4700 and MAX(ID)=4700.
> 5 FlowFiles are generated, the last one will say to fetch 1000, not 700. (But 
> I don't think this is really a bug, just an observation).
> 5 Flow Files are now in queue to be executed by ExecuteSQL.  Before the 5th 
> file can execute 400 new rows are added to the table.  When the final SQL 
> statement is executed 300 extra records, with higher ID values, will also be 
> pulled into NiFi.
> Second run (state: ID=4700).  Count(*) ID>4700 = 400 and MAX(ID)=5100.
> 1 Flow File is generated, but includes 300 records already pulled into NiFi.
> The solution is to have an optional property that will let users use the new 
> MAX(ID) as a right boundary when generating queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-3484) GenerateTableFetch Should Allow for Right Boundary

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157059#comment-16157059
 ] 

ASF GitHub Bot commented on NIFI-3484:
--

Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2091
  
+1 LGTM, built and ran unit tests, also verified the correct behavior on 
MySQL, Oracle 11, and Postgres DBs. Thanks for the improvement! Merging to 
master


> GenerateTableFetch Should Allow for Right Boundary
> --
>
> Key: NIFI-3484
> URL: https://issues.apache.org/jira/browse/NIFI-3484
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
> Fix For: 1.4.0
>
>
> When using GenerateTableFetch it places no right hand boundary on pages of 
> data.  This can lead to issues when the statement says to get the next 1000 
> records greater than a specific key, but records were added to the table 
> between the time the processor executed and when the SQL is being executed. 
> As a result it pulls in records that did not exist when the processor was 
> run.  On the next execution of the processor these records will be pulled in 
> a second time.
> Example:
> Partition Size = 1000
> First run (no state): Count(*)=4700 and MAX(ID)=4700.
> 5 FlowFiles are generated, the last one will say to fetch 1000, not 700. (But 
> I don't think this is really a bug, just an observation).
> 5 Flow Files are now in queue to be executed by ExecuteSQL.  Before the 5th 
> file can execute 400 new rows are added to the table.  When the final SQL 
> statement is executed 300 extra records, with higher ID values, will also be 
> pulled into NiFi.
> Second run (state: ID=4700).  Count(*) ID>4700 = 400 and MAX(ID)=5100.
> 1 Flow File is generated, but includes 300 records already pulled into NiFi.
> The solution is to have an optional property that will let users use the new 
> MAX(ID) as a right boundary when generating queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2091: NIFI-3484 GenerateTableFetch Should Allow for Right...

2017-09-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/nifi/pull/2091


---


[jira] [Updated] (NIFI-3484) GenerateTableFetch Should Allow for Right Boundary

2017-09-07 Thread Matt Burgess (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-3484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Burgess updated NIFI-3484:
---
Fix Version/s: 1.4.0

> GenerateTableFetch Should Allow for Right Boundary
> --
>
> Key: NIFI-3484
> URL: https://issues.apache.org/jira/browse/NIFI-3484
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.2.0
>Reporter: Peter Wicks
>Assignee: Peter Wicks
>Priority: Minor
> Fix For: 1.4.0
>
>
> When using GenerateTableFetch it places no right hand boundary on pages of 
> data.  This can lead to issues when the statement says to get the next 1000 
> records greater than a specific key, but records were added to the table 
> between the time the processor executed and when the SQL is being executed. 
> As a result it pulls in records that did not exist when the processor was 
> run.  On the next execution of the processor these records will be pulled in 
> a second time.
> Example:
> Partition Size = 1000
> First run (no state): Count(*)=4700 and MAX(ID)=4700.
> 5 FlowFiles are generated, the last one will say to fetch 1000, not 700. (But 
> I don't think this is really a bug, just an observation).
> 5 Flow Files are now in queue to be executed by ExecuteSQL.  Before the 5th 
> file can execute 400 new rows are added to the table.  When the final SQL 
> statement is executed 300 extra records, with higher ID values, will also be 
> pulled into NiFi.
> Second run (state: ID=4700).  Count(*) ID>4700 = 400 and MAX(ID)=5100.
> 1 Flow File is generated, but includes 300 records already pulled into NiFi.
> The solution is to have an optional property that will let users use the new 
> MAX(ID) as a right boundary when generating queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi issue #2091: NIFI-3484 GenerateTableFetch Should Allow for Right Bounda...

2017-09-07 Thread mattyb149
Github user mattyb149 commented on the issue:

https://github.com/apache/nifi/pull/2091
  
+1 LGTM, built and ran unit tests, also verified the correct behavior on 
MySQL, Oracle 11, and Postgres DBs. Thanks for the improvement! Merging to 
master


---


[GitHub] nifi issue #2119: NIFI-4341 - add provenance repository storage usage in UI

2017-09-07 Thread mcgilman
Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2119
  
Will review...


---


[jira] [Commented] (NIFI-4341) Display provenance repository storage usage in UI

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157051#comment-16157051
 ] 

ASF GitHub Bot commented on NIFI-4341:
--

Github user mcgilman commented on the issue:

https://github.com/apache/nifi/pull/2119
  
Will review...


> Display provenance repository storage usage in UI
> -
>
> Key: NIFI-4341
> URL: https://issues.apache.org/jira/browse/NIFI-4341
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: Pierre Villard
>Assignee: Pierre Villard
> Attachments: clusterView.png, systemDiagView.png
>
>
> Just like we have storage usage information for flow file repository and 
> content repository, it'd be interesting to display the same information for 
> provenance repository in system diagnostic view and cluster view.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4345) Add a controller service and a lookup service for MongoDB

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157030#comment-16157030
 ] 

ASF GitHub Bot commented on NIFI-4345:
--

Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2123#discussion_r137559094
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services/src/main/java/org/apache/nifi/mongodb/AbstractMongoDBControllerService.java
 ---
@@ -0,0 +1,223 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.mongodb;
+
+import com.mongodb.MongoClient;
+import com.mongodb.MongoClientOptions;
+import com.mongodb.MongoClientOptions.Builder;
+import com.mongodb.MongoClientURI;
+import com.mongodb.WriteConcern;
+import com.mongodb.client.MongoCollection;
+import com.mongodb.client.MongoDatabase;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.authentication.exception.ProviderCreationException;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.security.util.SslContextFactory;
+import org.apache.nifi.ssl.SSLContextService;
+import org.bson.Document;
+
+import javax.net.ssl.SSLContext;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+public class AbstractMongoDBControllerService extends 
AbstractControllerService {
+static final String WRITE_CONCERN_ACKNOWLEDGED = "ACKNOWLEDGED";
+static final String WRITE_CONCERN_UNACKNOWLEDGED = "UNACKNOWLEDGED";
+static final String WRITE_CONCERN_FSYNCED = "FSYNCED";
+static final String WRITE_CONCERN_JOURNALED = "JOURNALED";
+static final String WRITE_CONCERN_REPLICA_ACKNOWLEDGED = 
"REPLICA_ACKNOWLEDGED";
+static final String WRITE_CONCERN_MAJORITY = "MAJORITY";
+
+protected static final PropertyDescriptor URI = new 
PropertyDescriptor.Builder()
+.name("Mongo URI")
--- End diff --

That was copy pasta from the GetMongo processor (so that might need a 
second look), but I'll fix it here.
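
For context, the pattern the review (and the PR checklist's .displayName item)
points toward looks roughly like the following. The property name, description,
and validator are illustrative placeholders, not the values from PR #2123:

{code}
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.processor.util.StandardValidators;

public class MongoUriPropertyExample {
    // Stable machine-readable name plus a human-readable displayName, rather
    // than reusing the copy-pasted "Mongo URI" string for both purposes.
    static final PropertyDescriptor URI = new PropertyDescriptor.Builder()
            .name("mongo-uri")
            .displayName("Mongo URI")
            .description("MongoDB connection URI, e.g. mongodb://host:27017")
            .required(true)
            .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
            .build();
}
{code}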


> Add a controller service and a lookup service for MongoDB
> -
>
> Key: NIFI-4345
> URL: https://issues.apache.org/jira/browse/NIFI-4345
> Project: Apache NiFi
>  Issue Type: Improvement
>Reporter: Mike Thomsen
>
> - Create a Controller Service that wraps the functionality of the Mongo 
> driver.
> - Create a lookup service that can return elements based on a query.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi pull request #2123: NIFI-4345 Added a MongoDB controller service and a ...

2017-09-07 Thread MikeThomsen
Github user MikeThomsen commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2123#discussion_r137559094
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-mongodb-services-bundle/nifi-mongodb-services/src/main/java/org/apache/nifi/mongodb/AbstractMongoDBControllerService.java
 ---
@@ -0,0 +1,223 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.mongodb;
+
+import com.mongodb.MongoClient;
+import com.mongodb.MongoClientOptions;
+import com.mongodb.MongoClientOptions.Builder;
+import com.mongodb.MongoClientURI;
+import com.mongodb.WriteConcern;
+import com.mongodb.client.MongoCollection;
+import com.mongodb.client.MongoDatabase;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.annotation.lifecycle.OnStopped;
+import org.apache.nifi.authentication.exception.ProviderCreationException;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.security.util.SslContextFactory;
+import org.apache.nifi.ssl.SSLContextService;
+import org.bson.Document;
+
+import javax.net.ssl.SSLContext;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+public class AbstractMongoDBControllerService extends 
AbstractControllerService {
+static final String WRITE_CONCERN_ACKNOWLEDGED = "ACKNOWLEDGED";
+static final String WRITE_CONCERN_UNACKNOWLEDGED = "UNACKNOWLEDGED";
+static final String WRITE_CONCERN_FSYNCED = "FSYNCED";
+static final String WRITE_CONCERN_JOURNALED = "JOURNALED";
+static final String WRITE_CONCERN_REPLICA_ACKNOWLEDGED = 
"REPLICA_ACKNOWLEDGED";
+static final String WRITE_CONCERN_MAJORITY = "MAJORITY";
+
+protected static final PropertyDescriptor URI = new 
PropertyDescriptor.Builder()
+.name("Mongo URI")
--- End diff --

That was copy pasta from the GetMongo processor (so that might need a 
second look), but I'll fix it here.


---


[jira] [Commented] (NIFI-3218) MockProcessSession should prevent transferring new FlowFile to input queue

2017-09-07 Thread Brandon DeVries (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157029#comment-16157029
 ] 

Brandon DeVries commented on NIFI-3218:
---

Should this ticket be closed?

> MockProcessSession should prevent transferring new FlowFile to input queue
> --
>
> Key: NIFI-3218
> URL: https://issues.apache.org/jira/browse/NIFI-3218
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.1.0, 0.8.0
>Reporter: Joe Skora
>Assignee: Michael Hogue
>
> StandardProcessSession.transfer() throws an exception if called with a newly 
> created FlowFile and no relationship.  MockProcessSession should behave 
> similarly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (NIFI-4361) Server fails to start during recovery upon full disk

2017-09-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157017#comment-16157017
 ] 

ASF GitHub Bot commented on NIFI-4361:
--

GitHub user gresockj opened a pull request:

https://github.com/apache/nifi/pull/2133

NIFI-4361: Fixing early recovery shutdown due to EOF in walog partition

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [x] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gresockj/nifi NIFI-4361

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2133.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2133


commit 7457228bcfa7a2ec862f5395983fc4c5c3032faf
Author: Joe Gresock 
Date:   2017-09-07T14:32:43Z

NIFI-4361: Fixing early recovery shutdown due to EOF in walog partition




> Server fails to start during recovery upon full disk
> 
>
> Key: NIFI-4361
> URL: https://issues.apache.org/jira/browse/NIFI-4361
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.1.0, 1.2.0, 1.3.0
>Reporter: Joseph Gresock
>Assignee: Joseph Gresock
> Fix For: 1.4.0
>
>
> Our disk filled up -- we then freed up some space and restarted, but the 
> server failed to start up due to:
> ERROR [main] o.a.nifi.controller.StandardFlowService Failed to load flow from 
> cluster due to: org.apache.nifi.cluster.ConnectionException: Failed to 
> connect node to cluster due to: java.lang.IllegalStateException: Signaled end 
> to recovery, but there are more recovery files for Partition in directory 
> /data/nifi/flowfile_repository/partition-8
> at 
> org.wali.MinimalLockingWriteAheadLog$Partition.endRecovery(MinimalLockingWriteAheadLog.java:1047)
>  ~[nifi-write-ahead-log-1.1.0.jar:1.1.0]
> at 
> org.wali.MinimalLockingWriteAheadLog.recoverFromEdits(MinimalLockingWriteAheadLog.java:487)
>  ~[nifi-write-ahead-log-1.1.0.jar:1.1.0]
> at 
> org.wali.MinimalLockingWriteAheadLog.recoverRecords(MinimalLockingWriteAheadLog.java:301)
>  ~[nifi-write-ahead-log-1.1.0.jar:1.1.0]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
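
A speculative, self-contained sketch of the failure mode described above, not
the code in PR #2133: when the disk fills mid-write, the last record in a
write-ahead-log partition is truncated, and replay hits end-of-file partway
through a record. Treating that EOF as the end of the recoverable records lets
startup keep everything that was fully written instead of aborting:

{code}
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class TruncatedLogRecovery {
    // Reads length-prefixed records until the stream ends. A clean end of file
    // and a half-written trailing record both surface as EOFException; either
    // way, the records that were fully written are kept.
    public static List<byte[]> recover(final DataInputStream in) throws IOException {
        final List<byte[]> records = new ArrayList<>();
        try {
            while (true) {
                final int length = in.readInt();
                final byte[] payload = new byte[length];
                in.readFully(payload);
                records.add(payload);
            }
        } catch (final EOFException eof) {
            // Truncated (or finished) log: keep what was recovered so far.
        }
        return records;
    }
}
{code}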


[jira] [Updated] (NIFI-4361) Server fails to start during recovery upon full disk

2017-09-07 Thread Joseph Gresock (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Gresock updated NIFI-4361:
-
Status: Patch Available  (was: In Progress)

> Server fails to start during recovery upon full disk
> 
>
> Key: NIFI-4361
> URL: https://issues.apache.org/jira/browse/NIFI-4361
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.2.0, 1.1.0
>Reporter: Joseph Gresock
>Assignee: Joseph Gresock
> Fix For: 1.4.0
>
>
> Our disk filled up -- we then freed up some space and restarted, but the 
> server failed to start up due to:
> ERROR [main] o.a.nifi.controller.StandardFlowService Failed to load flow from 
> cluster due to: org.apache.nifi.cluster.ConnectionException: Failed to 
> connect node to cluster due to: java.lang.IllegalStateException: Signaled end 
> to recovery, but there are more recovery files for Partition in directory 
> /data/nifi/flowfile_repository/partition-8
> at 
> org.wali.MinimalLockingWriteAheadLog$Partition.endRecovery(MinimalLockingWriteAheadLog.java:1047)
>  ~[nifi-write-ahead-log-1.1.0.jar:1.1.0]
> at 
> org.wali.MinimalLockingWriteAheadLog.recoverFromEdits(MinimalLockingWriteAheadLog.java:487)
>  ~[nifi-write-ahead-log-1.1.0.jar:1.1.0]
> at 
> org.wali.MinimalLockingWriteAheadLog.recoverRecords(MinimalLockingWriteAheadLog.java:301)
>  ~[nifi-write-ahead-log-1.1.0.jar:1.1.0]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] nifi-minifi-cpp pull request #134: MINIFI-339: Add C2 base allowing for 1 pr...

2017-09-07 Thread achristianson
Github user achristianson commented on a diff in the pull request:

https://github.com/apache/nifi-minifi-cpp/pull/134#discussion_r137557066
  
--- Diff: CMakeLists.txt ---
@@ -35,6 +35,8 @@ ENDIF(POLICY CMP0048)
 include(CheckCXXCompilerFlag)
 CHECK_CXX_COMPILER_FLAG("-std=c++11 " COMPILER_SUPPORTS_CXX11)
 CHECK_CXX_COMPILER_FLAG("-std=c++0x " COMPILER_SUPPORTS_CXX0X)
+SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ")
+SET(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS}")
--- End diff --

What are these for?


---


[jira] [Created] (NIFI-4361) Server fails to start during recovery upon full disk

2017-09-07 Thread Joseph Gresock (JIRA)
Joseph Gresock created NIFI-4361:


 Summary: Server fails to start during recovery upon full disk
 Key: NIFI-4361
 URL: https://issues.apache.org/jira/browse/NIFI-4361
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.3.0, 1.2.0, 1.1.0
Reporter: Joseph Gresock
Assignee: Joseph Gresock
 Fix For: 1.4.0


Our disk filled up -- we then freed up some space and restarted, but the server 
failed to start up due to:

ERROR [main] o.a.nifi.controller.StandardFlowService Failed to load flow from 
cluster due to: org.apache.nifi.cluster.ConnectionException: Failed to connect 
node to cluster due to: java.lang.IllegalStateException: Signaled end to 
recovery, but there are more recovery files for Partition in directory 
/data/nifi/flowfile_repository/partition-8

at 
org.wali.MinimalLockingWriteAheadLog$Partition.endRecovery(MinimalLockingWriteAheadLog.java:1047)
 ~[nifi-write-ahead-log-1.1.0.jar:1.1.0]
at 
org.wali.MinimalLockingWriteAheadLog.recoverFromEdits(MinimalLockingWriteAheadLog.java:487)
 ~[nifi-write-ahead-log-1.1.0.jar:1.1.0]
at 
org.wali.MinimalLockingWriteAheadLog.recoverRecords(MinimalLockingWriteAheadLog.java:301)
 ~[nifi-write-ahead-log-1.1.0.jar:1.1.0]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (NIFI-2184) JettyServer should confirm "docs" path exists before using it in .createDocsWebApp().

2017-09-07 Thread Mark Owens (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-2184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Owens reassigned NIFI-2184:


Assignee: Mark Owens

> JettyServer should confirm "docs" path exists before using it in 
> .createDocsWebApp().
> -
>
> Key: NIFI-2184
> URL: https://issues.apache.org/jira/browse/NIFI-2184
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework, Documentation & Website
>Affects Versions: 1.0.0, 0.7.0
> Environment: Tested with 0.7.0-SNAPSHOT.  Looks like it will occur 
> with 1.x, but that is not confirmed.
>Reporter: Joe Skora
>Assignee: Mark Owens
>Priority: Minor
>  Labels: easyfix
>
> The application throws an exception and startup fails with "Resource directory 
> paths are malformed: docs" if the configured docs directory does not exist.
> Ideally it should start up without online documentation, but if it doesn't 
> start, an explicit log message (and possibly a message to the console) should 
> explain that the directory is missing.
> {code}
> 2016-07-06 13:30:13,840 ERROR [main] org.apache.nifi.NiFi Failure to launch 
> NiFi due to java.lang.reflect.InvocationTargetException
> java.lang.reflect.InvocationTargetException: null
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method) ~[na:1.7.0_80]
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>  ~[na:1.7.0_80]
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  ~[na:1.7.0_80]
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
> ~[na:1.7.0_80]
> at org.apache.nifi.NiFi.<init>(NiFi.java:131) 
> ~[nifi-runtime-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
> at org.apache.nifi.NiFi.main(NiFi.java:227) 
> ~[nifi-runtime-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
> Caused by: java.lang.IllegalStateException: Resource directory paths are 
> malformed: docs
> at 
> org.apache.nifi.web.server.JettyServer.createDocsWebApp(JettyServer.java:553) 
> ~[nifi-jetty-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
> at 
> org.apache.nifi.web.server.JettyServer.loadWars(JettyServer.java:337) 
> ~[nifi-jetty-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
> at 
> org.apache.nifi.web.server.JettyServer.<init>(JettyServer.java:140) 
> ~[nifi-jetty-0.7.0-SNAPSHOT.jar:0.7.0-SNAPSHOT]
> ... 6 common frames omitted
> 2016-07-06 13:30:13,841 INFO [Thread-1] org.apache.nifi.NiFi Initiating 
> shutdown of Jetty web server...
> {code}
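
A minimal sketch of the guard the ticket asks for, assuming an slf4j logger and
the "docs" path from the report; this is illustrative, not the actual
JettyServer.createDocsWebApp() change:

{code}
import java.io.File;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DocsDirectoryCheck {
    private static final Logger logger = LoggerFactory.getLogger(DocsDirectoryCheck.class);

    // Returns true only if the configured docs directory exists, so the caller
    // can skip the documentation web app (with a clear warning) instead of
    // failing startup with "Resource directory paths are malformed: docs".
    static boolean docsAvailable(final String docsDirectory) {
        final File docsDir = new File(docsDirectory);
        if (!docsDir.isDirectory()) {
            logger.warn("Documentation directory '{}' does not exist; starting without the online documentation web app.",
                    docsDir.getAbsolutePath());
            return false;
        }
        return true;
    }
}
{code}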



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)