[GitHub] nifi pull request #2619: NIFI-5059 Updated MongoDBLookupService to be able t...

2018-06-07 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2619#discussion_r193733115
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/schema/access/SchemaAccessUtils.java
 ---
@@ -176,6 +176,8 @@ public static SchemaAccessStrategy 
getSchemaAccessStrategy(final String allowabl
 return new 
HortonworksAttributeSchemaReferenceStrategy(schemaRegistry);
 } else if 
(allowableValue.equalsIgnoreCase(CONFLUENT_ENCODED_SCHEMA.getValue())) {
 return new ConfluentSchemaRegistryStrategy(schemaRegistry);
+} else if 
(allowableValue.equalsIgnoreCase(INFER_SCHEMA.getValue())) {
--- End diff --

Since this inference only works when the content is JSON, I think this 
option should only be available when using a JSON-related record reader, and 
not in the default case.

This would be similar to how the AvroReader makes the "Embedded Avro Schema" 
option available - 
https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroReader.java#L63
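
For context, applied to a JSON reader that pattern would look roughly like the 
sketch below (illustrative only: INFER_SCHEMA is the allowable value added in 
this PR, and the overridden method name is recalled from the AvroReader code 
linked above, so it may not match exactly).

```
import org.apache.nifi.components.AllowableValue;
import org.apache.nifi.serialization.SchemaRegistryService;

import java.util.ArrayList;
import java.util.List;

import static org.apache.nifi.schema.access.SchemaAccessUtils.INFER_SCHEMA;

// Sketch only: a JSON record reader advertising schema inference as an extra
// Schema Access Strategy, mirroring how AvroReader adds "Embedded Avro Schema".
public class JsonReaderStrategyValuesSketch extends SchemaRegistryService {

    @Override
    protected List<AllowableValue> getSchemaAccessStrategyValues() {
        // start from the strategies every reader gets, then add the JSON-only option
        final List<AllowableValue> values = new ArrayList<>(super.getSchemaAccessStrategyValues());
        values.add(INFER_SCHEMA);
        return values;
    }
}
```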




---


[GitHub] nifi pull request #2619: NIFI-5059 Updated MongoDBLookupService to be able t...

2018-06-07 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2619#discussion_r193734191
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-standard-record-utils/src/main/java/org/apache/nifi/schema/access/JsonSchemaAccessStrategy.java
 ---
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.schema.access;
+
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.util.Map;
+
+public interface JsonSchemaAccessStrategy extends SchemaAccessStrategy {
--- End diff --

Can this be done without introducing a new method to the interface?

The original interface has:
`getSchema(Map<String, String> variables, InputStream contentStream, RecordSchema readSchema)`

Since we know the content has to be JSON in this case, can't we read 
contentStream into the Map in the implementation of the access 
strategy, rather than requiring callers to do that first?
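
To make that concrete, here is a minimal, self-contained sketch of the parsing 
step (Jackson assumed on the classpath; the NiFi interface wiring and the 
inference call itself are omitted since they come from this PR):

```
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Sketch: the access strategy reads the JSON contentStream into a Map itself,
// so callers just hand over the raw stream instead of pre-converting.
public class JsonContentParsingSketch {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    static Map<String, Object> readContent(final InputStream contentStream) throws IOException {
        return MAPPER.readValue(contentStream, new TypeReference<Map<String, Object>>() {});
    }

    public static void main(final String[] args) throws IOException {
        final InputStream in = new ByteArrayInputStream(
                "{\"name\":\"nifi\",\"count\":1}".getBytes(StandardCharsets.UTF_8));
        System.out.println(readContent(in)); // prints {name=nifi, count=1}
    }
}
```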


---


[GitHub] nifi pull request #2619: NIFI-5059 Updated MongoDBLookupService to be able t...

2018-06-07 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2619#discussion_r193735509
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-avro-record-utils/src/main/java/org/apache/nifi/serialization/JsonInferenceSchemaRegistryService.java
 ---
@@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.serialization;
+
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.schema.access.JsonSchemaAccessStrategy;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.nifi.schema.access.SchemaAccessUtils.INFER_SCHEMA;
+import static 
org.apache.nifi.schema.access.SchemaAccessUtils.SCHEMA_BRANCH_NAME;
+import static org.apache.nifi.schema.access.SchemaAccessUtils.SCHEMA_NAME;
+import static 
org.apache.nifi.schema.access.SchemaAccessUtils.SCHEMA_NAME_PROPERTY;
+import static 
org.apache.nifi.schema.access.SchemaAccessUtils.SCHEMA_REGISTRY;
+import static org.apache.nifi.schema.access.SchemaAccessUtils.SCHEMA_TEXT;
+import static 
org.apache.nifi.schema.access.SchemaAccessUtils.SCHEMA_TEXT_PROPERTY;
+import static 
org.apache.nifi.schema.access.SchemaAccessUtils.SCHEMA_VERSION;
+
+public class JsonInferenceSchemaRegistryService extends 
SchemaRegistryService {
--- End diff --

I'm not totally sure about this, but I think if we take the approach 
mentioned in my other comments, we probably wouldn't need this class since the 
JSON readers would handle the logic for when schemaAccess is set to "JSON 
Inference", similar to how AvroReader handles when embedded schema is selected 
- 
https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-services-bundle/nifi-record-serialization-services/src/main/java/org/apache/nifi/avro/AvroReader.java#L78
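
Roughly, that reader-side handling would look like the sketch below 
(illustrative only: the getSchemaAccessStrategy signature is recalled from the 
AvroReader code linked above and may differ, and JsonSchemaInferenceStrategy 
is a hypothetical stand-in for whatever strategy class this PR ends up with).

```
import org.apache.nifi.controller.ConfigurationContext;
import org.apache.nifi.schema.access.SchemaAccessStrategy;
import org.apache.nifi.schemaregistry.services.SchemaRegistry;
import org.apache.nifi.serialization.SchemaRegistryService;

import static org.apache.nifi.schema.access.SchemaAccessUtils.INFER_SCHEMA;

// Sketch only: the JSON reader returns the inference strategy itself when
// "JSON Inference" is selected, so no separate JsonInferenceSchemaRegistryService is needed.
public class JsonReaderStrategySelectionSketch extends SchemaRegistryService {

    @Override
    protected SchemaAccessStrategy getSchemaAccessStrategy(final String allowableValue,
            final SchemaRegistry schemaRegistry, final ConfigurationContext context) {
        if (allowableValue.equalsIgnoreCase(INFER_SCHEMA.getValue())) {
            return new JsonSchemaInferenceStrategy(); // hypothetical strategy from this PR
        }
        return super.getSchemaAccessStrategy(allowableValue, schemaRegistry, context);
    }
}
```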


---


[GitHub] nifi pull request #2619: NIFI-5059 Updated MongoDBLookupService to be able t...

2018-06-07 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2619#discussion_r193739859
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-standard-record-utils/src/main/java/org/apache/nifi/schema/access/JsonSchemaAccessStrategy.java
 ---
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.schema.access;
+
+import org.apache.nifi.serialization.record.RecordSchema;
+
+import java.io.IOException;
+import java.util.Map;
+
+public interface JsonSchemaAccessStrategy extends SchemaAccessStrategy {
--- End diff --

Ok but I'm confused because I'm not seeing an actual call that uses the new 
method...

The MongoLookupService does this:
```
private RecordSchema loadSchema(Map<String, Object> coordinates, Document doc) {
+    Map<String, String> variables = coordinates.entrySet().stream()
+        .collect(Collectors.toMap(
+            e -> e.getKey(),
+            e -> e.getValue().toString()
+        ));
+    ObjectMapper mapper = new ObjectMapper();
+    try {
+        byte[] bytes = mapper.writeValueAsBytes(doc);
+        return getSchema(variables, new ByteArrayInputStream(bytes), null);
+    } catch (Exception ex) {
+        return null;
+    }
+}
```

So since we are reserializing the Doc here and putting the coordinates as 
variables, I'm not seeing where we call the new method, but I may be missing it.


---


[GitHub] nifi issue #2619: NIFI-5059 Updated MongoDBLookupService to be able to detec...

2018-06-07 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/2619
  
I haven't gone too deep looking at this, but if the goal is to have a 
re-usable way to infer a schema from JSON across various NoSQL components, have 
we considered just putting some utility code in a JAR somewhere under 
nifi-nar-bundles/nifi-extension-utils rather than trying to hook into the 
SchemaAccessStrategy/SchemaRegistryService?

I'm just on the fence about whether the schema access stuff makes sense 
here since that was designed for the readers/writers, and this is really coming 
from a different angle of already having some Map object in memory.
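
As a concrete example of that alternative, here is a rough sketch of a 
standalone utility that infers a RecordSchema from a Map already in memory 
(class and method names are hypothetical, the nifi-record imports are from 
memory, and only flat documents with a few simple value types are handled):

```
import org.apache.nifi.serialization.SimpleRecordSchema;
import org.apache.nifi.serialization.record.DataType;
import org.apache.nifi.serialization.record.RecordField;
import org.apache.nifi.serialization.record.RecordFieldType;
import org.apache.nifi.serialization.record.RecordSchema;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the "plain utility JAR" idea: no SchemaAccessStrategy or
// SchemaRegistryService hook, just a static helper any NoSQL component can call.
public final class MapSchemaInference {

    private MapSchemaInference() {
    }

    public static RecordSchema inferSchema(final Map<String, Object> values) {
        final List<RecordField> fields = new ArrayList<>();
        for (final Map.Entry<String, Object> entry : values.entrySet()) {
            fields.add(new RecordField(entry.getKey(), dataTypeFor(entry.getValue())));
        }
        return new SimpleRecordSchema(fields);
    }

    private static DataType dataTypeFor(final Object value) {
        if (value instanceof Integer) {
            return RecordFieldType.INT.getDataType();
        } else if (value instanceof Long) {
            return RecordFieldType.LONG.getDataType();
        } else if (value instanceof Double || value instanceof Float) {
            return RecordFieldType.DOUBLE.getDataType();
        } else if (value instanceof Boolean) {
            return RecordFieldType.BOOLEAN.getDataType();
        }
        return RecordFieldType.STRING.getDataType(); // fall back to string for anything else
    }
}
```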




---


[GitHub] nifi issue #1648: NIFI-2940: Accessing history for deleted components

2017-04-04 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1648
  
Will review...


---


[GitHub] nifi issue #1648: NIFI-2940: Accessing history for deleted components

2017-04-04 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1648
  
+1 Verified that access to the history of deleted components is now 
controlled by access to the controller, and otherwise by the policies of the 
existing component; verified in both secure and unsecured setups.


---


[GitHub] nifi issue #1647: NIFI-3664: - UI Timestamp Issue

2017-04-05 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1647
  
Reviewing...


---


[GitHub] nifi issue #1647: NIFI-3664: - UI Timestamp Issue

2017-04-05 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1647
  
+1 Verified that before this change a local cluster was showing the time as 
EST even though running 'date' on my laptop reported EDT, which caused the 
time to be off by an hour in the NiFi UI. After this change, NiFi respects my 
system time and shows EDT.


---


[GitHub] nifi issue #1606: NIFI-3528 Added support for keytab/principal to Kafka 0.10...

2017-04-06 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1606
  
+1 Verified this is working correctly, much nicer than configuring the JAAS 
file in bootstrap.conf, will merge to master!


---


[GitHub] nifi issue #1654: NIFI-3678: Ensure that we catch EOFException when reading ...

2017-04-06 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1654
  
Will review...


---


[GitHub] nifi pull request #1645: NIFI-3644 - Added HBase_1_1_2_ClientMapCacheService

2017-04-06 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1645#discussion_r110264255
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-hbase_1_1_2-client-service-bundle/nifi-hbase_1_1_2-client-service/src/main/java/org/apache/nifi/hbase/HBase_1_1_2_ClientMapCacheService.java
 ---
@@ -0,0 +1,224 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.hbase;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.SeeAlso;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+
+import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import java.io.ByteArrayOutputStream;
+import org.apache.nifi.reporting.InitializationException;
+
+import java.nio.charset.StandardCharsets;
+import org.apache.nifi.hbase.scan.ResultCell;
+import org.apache.nifi.hbase.scan.ResultHandler;
+import org.apache.nifi.hbase.scan.Column;
+import org.apache.nifi.hbase.put.PutColumn;
+
+
+import org.apache.nifi.processor.util.StandardValidators;
+
+@Tags({"distributed", "cache", "state", "map", "cluster","hbase"})
+@SeeAlso(classNames = 
{"org.apache.nifi.distributed.cache.server.map.DistributedMapCacheClient", 
"org.apache.nifi.hbase.HBase_1_1_2_ClientService"})
+@CapabilityDescription("Provides the ability to use an HBase table as a 
cache, in place of a DistributedMapCache."
++ " Uses a HBase_1_1_2_ClientService controller to communicate with 
HBase.")
+
+public class HBase_1_1_2_ClientMapCacheService extends 
AbstractControllerService implements DistributedMapCacheClient {
+
+static final PropertyDescriptor HBASE_CLIENT_SERVICE = new 
PropertyDescriptor.Builder()
+.name("HBase Client Service")
+.description("Specifies the HBase Client Controller Service to use 
for accessing HBase.")
+.required(true)
+.identifiesControllerService(HBaseClientService.class)
+.build();
+
+public static final PropertyDescriptor HBASE_CACHE_TABLE_NAME = new 
PropertyDescriptor.Builder()
+.name("HBase Cache Table Name")
+.description("Name of the table on HBase to use for the cache.")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor HBASE_COLUMN_FAMILY = new 
PropertyDescriptor.Builder()
+.name("HBase Column Family")
+.description("Name of the column family on HBase to use for the 
cache.")
+.required(true)
+.defaultValue("f")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+public static final PropertyDescriptor HBASE_COLUMN_QUALIFIER = new 
PropertyDescriptor.Builder()
+.name("HBase Column Qualifier")
+.description("Name of the column qualifier on HBase to use for the 
cache")
+.defaultValue("q")
+.required(true)
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+
+@Override
+protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+final List<PropertyDescriptor> descriptors = new ArrayList<>();
+des

[GitHub] nifi issue #1656: NIFI-3678

2017-04-07 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1656
  
+1 looks good, will merge to master


---


[GitHub] nifi issue #1653: NIFI-2940: Fixing broken integration test following change...

2017-04-07 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1653
  
+1 looks good, verified the tests pass now


---


[GitHub] nifi issue #1673: NIFI-3706: Removing non-existent resource when converting ...

2017-04-18 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1673
  
Looks good, will merge to master


---


[GitHub] nifi issue #1694: NIFI-3738 Fixed NPE when ListenSyslog UDP datagram has zer...

2017-04-25 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1694
  
+1 looks good and will merge to master


---


[GitHub] nifi pull request #1712: NIFI-3724 - Add Put/Fetch Parquet Processors

2017-04-27 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1712

NIFI-3724 - Add Put/Fetch Parquet Processors

This PR adds a new nifi-parquet-bundle with PutParquet and FetchParquet 
processors. These work similarly to PutHDFS and FetchHDFS, but read and write 
Records instead.

While working on this I needed to reuse portions of the record 
reader/writer code, and thus refactored some of the project structure which 
caused many files to move around.

Summary of changes:
- Created nifi-parquet-bundle
- Created nifi-commons/nifi-record to hold domain/API related to records
- Created nifi-nar-bundles/nifi-extension-utils as a place for utility code 
specific to extensions
- Moved nifi-commons/nifi-processor-utils under nifi-extension-utils
- Moved nifi-commons/nifi-hadoop-utils under nifi-extension-utils
- Created nifi-extension-utils/nifi-record-utils for utility code related to 
records

To test the Parquet processors you can create a core-site.xml with a local 
file system and read/write parquet to local directories:

```
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>file:///</value>
    </property>
</configuration>
```


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi parquet-bundle

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1712.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1712


commit a35e5957f5ff8c47df5352b7b1a5ef494fed8633
Author: Bryan Bende 
Date:   2017-04-12T22:25:31Z

NIFI-3724 - Initial commit of Parquet bundle with PutParquet and 
FetchParquet
- Creating nifi-records-utils to share utility code from record services
- Refactoring Parquet tests to use MockRecorderParser and MockRecordWriter
- Refactoring AbstractPutHDFSRecord to use schema access strategy
- Adding custom validate to AbstractPutHDFSRecord and adding handling of 
UNION types when writing Records as Avro
- Refactoring project structure to get CS API references out of 
nifi-commons, introducing nifi-extension-utils under nifi-nar-bundles




---


[GitHub] nifi issue #1711: NIFI-3755 - Restoring Hive exception handling behavior

2017-04-28 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1711
  
Looks good, tested this out and the error handling appears to work 
correctly now, will merge to master and include for 1.2, thanks!


---


[GitHub] nifi pull request #1660: NIFI-3674: Implementing SiteToSiteStatusReportingTa...

2017-04-28 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1660#discussion_r113976234
  
--- Diff: 
nifi-nar-bundles/nifi-site-to-site-reporting-bundle/nifi-site-to-site-reporting-task/src/main/java/org/apache/nifi/reporting/SiteToSiteStatusReportingTask.java
 ---
@@ -0,0 +1,415 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.reporting;
+
+import java.io.IOException;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.nio.charset.StandardCharsets;
+import java.text.DateFormat;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TimeZone;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+import java.util.regex.Pattern;
+
+import javax.json.Json;
+import javax.json.JsonArray;
+import javax.json.JsonArrayBuilder;
+import javax.json.JsonBuilderFactory;
+import javax.json.JsonObjectBuilder;
+import javax.json.JsonValue;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.status.ConnectionStatus;
+import org.apache.nifi.controller.status.PortStatus;
+import org.apache.nifi.controller.status.ProcessGroupStatus;
+import org.apache.nifi.controller.status.ProcessorStatus;
+import org.apache.nifi.controller.status.RemoteProcessGroupStatus;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.remote.Transaction;
+import org.apache.nifi.remote.TransferDirection;
+
+@Tags({"status", "metrics", "history", "site", "site to site"})
+@CapabilityDescription("Publishes Status events using the Site To Site 
protocol.  "
++ "The component type and name filter regexes form a union: only 
components matching both regexes will be reported.  "
++ "However, all process groups are recursively searched for 
matching components, regardless of whether the process group matches the 
component filters.")
+public class SiteToSiteStatusReportingTask extends 
AbstractSiteToSiteReportingTask {
+
+static final String TIMESTAMP_FORMAT = "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'";
+
+static final PropertyDescriptor PLATFORM = new 
PropertyDescriptor.Builder()
+.name("Platform")
+.description("The value to use for the platform field in each 
provenance event.")
+.required(true)
+.expressionLanguageSupported(true)
+.defaultValue("nifi")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+static final PropertyDescriptor COMPONENT_TYPE_FILTER_REGEX = new 
PropertyDescriptor.Builder()
+.name("Component Type Filter Regex")
+.description("A regex specifying which component types to report.  
Any component type matching this regex will be included.  "
++ "Component types are: Processor, RootProcessGroup, 
ProcessGroup, RemoteProcessGroup, Connection, InputPort, OutputPort")
+.required(true)
+.expressionLanguageSupported(true)
+
.defaultValue("(Processor|ProcessGroup|RemoteProcessGroup|RootProcessGroup|Connection|InputPort|OutputPort)")
+.addValidator(StandardValidators.REGULAR_EXPRESSION_VALIDATOR)
+.build();
+static final PropertyDescriptor COMPONENT_NAME_FILTER_REGEX = new 
PropertyDescriptor.Builder()
+.name("Component Name Filter Regex")
+.descripti

[GitHub] nifi pull request #1660: NIFI-3674: Implementing SiteToSiteStatusReportingTa...

2017-04-28 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1660#discussion_r113976097
  
--- Diff: 
nifi-nar-bundles/nifi-site-to-site-reporting-bundle/nifi-site-to-site-reporting-task/src/main/java/org/apache/nifi/reporting/SiteToSiteStatusReportingTask.java
 ---
@@ -0,0 +1,415 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.reporting;
+
+import java.io.IOException;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.nio.charset.StandardCharsets;
+import java.text.DateFormat;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TimeZone;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+import java.util.regex.Pattern;
+
+import javax.json.Json;
+import javax.json.JsonArray;
+import javax.json.JsonArrayBuilder;
+import javax.json.JsonBuilderFactory;
+import javax.json.JsonObjectBuilder;
+import javax.json.JsonValue;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.controller.status.ConnectionStatus;
+import org.apache.nifi.controller.status.PortStatus;
+import org.apache.nifi.controller.status.ProcessGroupStatus;
+import org.apache.nifi.controller.status.ProcessorStatus;
+import org.apache.nifi.controller.status.RemoteProcessGroupStatus;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.remote.Transaction;
+import org.apache.nifi.remote.TransferDirection;
+
+@Tags({"status", "metrics", "history", "site", "site to site"})
+@CapabilityDescription("Publishes Status events using the Site To Site 
protocol.  "
++ "The component type and name filter regexes form a union: only 
components matching both regexes will be reported.  "
++ "However, all process groups are recursively searched for 
matching components, regardless of whether the process group matches the 
component filters.")
+public class SiteToSiteStatusReportingTask extends 
AbstractSiteToSiteReportingTask {
+
+static final String TIMESTAMP_FORMAT = "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'";
+
+static final PropertyDescriptor PLATFORM = new 
PropertyDescriptor.Builder()
+.name("Platform")
+.description("The value to use for the platform field in each 
provenance event.")
+.required(true)
+.expressionLanguageSupported(true)
+.defaultValue("nifi")
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.build();
+static final PropertyDescriptor COMPONENT_TYPE_FILTER_REGEX = new 
PropertyDescriptor.Builder()
+.name("Component Type Filter Regex")
+.description("A regex specifying which component types to report.  
Any component type matching this regex will be included.  "
++ "Component types are: Processor, RootProcessGroup, 
ProcessGroup, RemoteProcessGroup, Connection, InputPort, OutputPort")
+.required(true)
+.expressionLanguageSupported(true)
--- End diff --

It looks like the code below doesn't evaluate expression language when getting 
the value, so we should either set this to false, or evaluate it below.
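
For reference, the difference is roughly the following (a sketch using the 
PLATFORM descriptor from the diff above, not the exact reporting task code):

```
// Without evaluating expression language (what the code below appears to do):
final String platformLiteral = context.getProperty(PLATFORM).getValue();

// With evaluation, which is what expressionLanguageSupported(true) implies:
final String platformEvaluated = context.getProperty(PLATFORM).evaluateAttributeExpressions().getValue();
```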


---


[GitHub] nifi pull request #1639: NIFI-1939 - Correct issue where ParseSyslog was una...

2017-04-28 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1639#discussion_r113979345
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ParseSyslog.java
 ---
@@ -57,13 +57,17 @@
 @SupportsBatching
 @InputRequirement(Requirement.INPUT_REQUIRED)
 @Tags({"logs", "syslog", "attributes", "system", "event", "message"})
-@CapabilityDescription("Parses the contents of a Syslog message and adds 
attributes to the FlowFile for each of the parts of the Syslog message")
+@CapabilityDescription("Attempts to parses the contents of a Syslog 
message in accordance to RFC5424 and RFC3164 " +
+"formats and adds attributes to the FlowFile for each of the parts 
of the Syslog message." +
+"Note: Be mindfull that RFC3164 is informational and a wide range 
of different implementations are present in" +
+" the wild. If messages fail parsing, considering using RFC5424 or 
using a generic parsin processors such as " +
--- End diff --

Minor typo on 'parsin'


---


[GitHub] nifi pull request #1639: NIFI-1939 - Correct issue where ParseSyslog was una...

2017-04-28 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1639#discussion_r113979520
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ParseSyslog.java
 ---
@@ -57,13 +57,17 @@
 @SupportsBatching
 @InputRequirement(Requirement.INPUT_REQUIRED)
 @Tags({"logs", "syslog", "attributes", "system", "event", "message"})
-@CapabilityDescription("Parses the contents of a Syslog message and adds 
attributes to the FlowFile for each of the parts of the Syslog message")
+@CapabilityDescription("Attempts to parses the contents of a Syslog 
message in accordance to RFC5424 and RFC3164 " +
+"formats and adds attributes to the FlowFile for each of the parts 
of the Syslog message." +
+"Note: Be mindfull that RFC3164 is informational and a wide range 
of different implementations are present in" +
+" the wild. If messages fail parsing, considering using RFC5424 or 
using a generic parsin processors such as " +
+"ExtractGrok.")
 @WritesAttributes({@WritesAttribute(attribute = "syslog.priority", 
description = "The priority of the Syslog message."),
 @WritesAttribute(attribute = "syslog.severity", description = "The 
severity of the Syslog message derived from the priority."),
 @WritesAttribute(attribute = "syslog.facility", description = "The 
facility of the Syslog message derived from the priority."),
 @WritesAttribute(attribute = "syslog.version", description = "The 
optional version from the Syslog message."),
 @WritesAttribute(attribute = "syslog.timestamp", description = "The 
timestamp of the Syslog message."),
-@WritesAttribute(attribute = "syslog.hostname", description = "The 
hostname of the Syslog message."),
+@WritesAttribute(attribute = "syslog.hostname", description = "The 
hostname or IP address of the Syslog message."),
--- End diff --

Should we update this on ListenSyslog as well since the same parsing will 
be used there? 


---


[GitHub] nifi issue #1660: NIFI-3674: Implementing SiteToSiteStatusReportingTask

2017-05-01 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1660
  
Since the above comments were so trivial, I went ahead and made the changes 
and will merge to master shortly, thanks!


---


[GitHub] nifi issue #1720: NIFI-3670 - Expose the control of ListenSyslog's CLIENT_AU...

2017-05-01 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1720
  
+1 Looks good, verified client auth can be controlled through the 
processor, will merge to master


---


[GitHub] nifi pull request #1712: NIFI-3724 - Add Put/Fetch Parquet Processors

2017-05-01 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1712#discussion_r114170838
  
--- Diff: 
nifi-nar-bundles/nifi-extension-utils/nifi-record-utils/nifi-hadoop-record-utils/src/main/java/org/apache/nifi/processors/hadoop/AbstractPutHDFSRecord.java
 ---
@@ -0,0 +1,505 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.hadoop;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.nifi.annotation.behavior.TriggerWhenEmpty;
+import org.apache.nifi.annotation.configuration.DefaultSettings;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.FlowFileAccessException;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.processors.hadoop.exception.FailureException;
+import 
org.apache.nifi.processors.hadoop.exception.RecordReaderFactoryException;
+import org.apache.nifi.processors.hadoop.record.HDFSRecordWriter;
+import org.apache.nifi.schema.access.SchemaAccessStrategy;
+import org.apache.nifi.schema.access.SchemaAccessUtils;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.schemaregistry.services.SchemaRegistry;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.WriteResult;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.RecordSet;
+import org.apache.nifi.util.StopWatch;
+
+import java.io.BufferedInputStream;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.security.PrivilegedAction;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicReference;
+
+import static 
org.apache.nifi.schema.access.SchemaAccessUtils.HWX_CONTENT_ENCODED_SCHEMA;
+import static 
org.apache.nifi.schema.access.SchemaAccessUtils.HWX_SCHEMA_REF_ATTRIBUTES;
+import static 
org.apache.nifi.schema.access.SchemaAccessUtils.SCHEMA_ACCESS_STRATEGY;
+import static org.apache.nifi.schema.access.SchemaAccessUtils.SCHEMA_NAME;
+import static 
org.apache.nifi.schema.access.SchemaAccessUtils.SCHEMA_NAME_PROPERTY;
+import static 
org.apache.nifi.schema.access.SchemaAccessUtils.SCHEMA_REGISTRY;
+import static org.apache.nifi.schema.access.SchemaAccessUtils.SCHEMA_TEXT;
+import static 
org.apache.nifi.schema.access.SchemaAccessUtils.SCHEMA_TEXT_PROPERTY;
+
+/**
+ * Base class for processors that write Records to HDFS.
+ */
+@TriggerWhenEmpty // trigger when empty so we have a chance to perform a 
Kerberos re-login
+@DefaultSettings(yieldDuration = "100 ms") // decrease the default yield

[GitHub] nifi issue #1712: NIFI-3724 - Add Put/Fetch Parquet Processors

2017-05-01 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1712
  
Thanks Andy, I just pushed a commit that addresses your comments, we should 
be good to go. I am going to look into the template issue, but I agree that it 
is not caused by the changes in this PR.


---


[GitHub] nifi issue #1713: NIFI-3721 Added documentation for Encrypted Provenance Rep...

2017-05-01 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1713
  
+1 on this, looks good!


---


[GitHub] nifi issue #1730: NIFI-3764: Ensuring that UUIDs are not converted multiple ...

2017-05-02 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1730
  
Reviewing...


---


[GitHub] nifi issue #1730: NIFI-3764: Ensuring that UUIDs are not converted multiple ...

2017-05-02 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1730
  
+1 code looks good, tested this on a template that was previously importing 
incorrectly and verified it imports correctly after this change, will merge to 
master


---


[GitHub] nifi issue #1716: NIFI-3759 - avro append for PutHDFS processor

2017-05-03 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1716
  
@jonashartwig have you seen Joe's comment here? 

https://issues.apache.org/jira/browse/NIFI-3759?focusedCommentId=15988716&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15988716


---


[GitHub] nifi pull request #1739: NIFI-3776 Correcting links in documentation for See...

2017-05-03 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1739

NIFI-3776 Correcting links in documentation for SeeAlso and propertie…

…s referencing controller services

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-3776

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1739.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1739






---


[GitHub] nifi issue #1740: NIFI-3782: Controller Service duplicated during Copy/Paste

2017-05-03 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1740
  
Reviewing...


---


[GitHub] nifi issue #1740: NIFI-3782: Controller Service duplicated during Copy/Paste

2017-05-03 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1740
  
+1 verified copy & paste behavior is working as expected after this patch, 
will merge to master


---


[GitHub] nifi issue #1742: NIFI-3787: Addressed NPE and ensure that if validation fai...

2017-05-03 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1742
  
Reviewing...


---


[GitHub] nifi issue #1742: NIFI-3787: Addressed NPE and ensure that if validation fai...

2017-05-03 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1742
  
+1 Looks good and will merge to master, thanks


---


[GitHub] nifi pull request #1743: NIFI-3789 Removing unnecessary intermittent test fa...

2017-05-03 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1743

NIFI-3789 Removing unnecessary intermittent test failure as described…

… in JIRA

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-3789

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1743.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1743


commit 14f61adf35bb4db0805e6a6406a5bc1bc0536f7f
Author: Bryan Bende 
Date:   2017-05-03T18:02:37Z

NIFI-3789 Removing unnecessary intermittent test failure as described in 
JIRA




---


[GitHub] nifi issue #1748: NIFI-3793 - switch to use project.version

2017-05-04 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1748
  
Thanks Yolanda, will review...


---


[GitHub] nifi issue #1746: NIFI-3794 - Expose the control of ListenRELP's CLIENT_AUTH...

2017-05-04 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1746
  
+1 Looks good, since this is the same change we just made to ListenSyslog I 
will go ahead and merge it to master for the 1.2 release


---


[GitHub] nifi issue #1761: NIFI-3818: PutHiveStreaming throws IllegalStateException

2017-05-05 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1761
  
I'm a +1 on these changes, seems like this might be good to include for 
1.2.0 now that we are going to make an RC2


---


[GitHub] nifi issue #1764: NIFI-3820 added calcite to assembly notice and updated all...

2017-05-05 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1764
  
+1 Looks good, thanks Joe, will merge to master


---


[GitHub] nifi issue #1766: NIFI-3826 added proper NOTICE entries for apache calcite a...

2017-05-05 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1766
  
+1 Looks good, merging to master


---


[GitHub] nifi issue #1780: NIFI-3865: Fix issues viewing content when clustered

2017-05-10 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1780
  
Will review...


---


[GitHub] nifi issue #1780: NIFI-3865: Fix issues viewing content when clustered

2017-05-10 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1780
  
+1 Verified content shows properly in the content viewer after this change, 
will merge to master


---


[GitHub] nifi pull request #1808: NIFI-3904 Adding logic to only reload when incoming...

2017-05-16 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1808

NIFI-3904 Adding logic to only reload when incoming bundle is differe…

…nt and to use additional URLs from spec component

NIFI-3908 Changing UI to submit filterType for CSs and filter for 
processors and reporting tasks


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-3904

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1808.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1808


commit 3ac6a887c9d7c5473fd94b0aa12ade756496cbbd
Author: Bryan Bende 
Date:   2017-05-16T17:52:10Z

NIFI-3904 Adding logic to only reload when incoming bundle is different and 
to use additional URLs from spec component
NIFI-3908 Changing UI to submit filterType for CSs and filter for 
processors and reporting tasks




---


[GitHub] nifi pull request #1808: NIFI-3904 Adding logic to only reload when incoming...

2017-05-16 Thread bbende
Github user bbende closed the pull request at:

https://github.com/apache/nifi/pull/1808


---


[GitHub] nifi issue #1822: NIFI-3853 fixing tests to properly encapsulate their envir...

2017-05-18 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1822
  
Reviewing...


---


[GitHub] nifi issue #1822: NIFI-3853 fixing tests to properly encapsulate their envir...

2017-05-18 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1822
  
+1 looks good, full build passes with contrib-check, will merge


---


[GitHub] nifi pull request #1829: NIFI-3945 Adding documentation about security protoc...

2017-05-19 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1829

NIFI-3945 Adding documentation about security protocols to Kafka 0.10 …

…processors

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi kafka-security-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1829.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1829


commit 0cd6dae049873d4a3bdd058aef0758653912c885
Author: Bryan Bende 
Date:   2017-05-19T13:37:34Z

NIFI-3945 Adding documentation about security protocols to Kafka 0.10 
processors




---


[GitHub] nifi pull request #1831: NIFI-3942 Making IPLookupService reload the databas...

2017-05-19 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1831

NIFI-3942 Making IPLookupService reload the database file on the fly …

…when detecting the file has changed

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-3942

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1831.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1831


commit 560d1000a9079a38ed0d0ad5e60aa023eaaa41a9
Author: Bryan Bende 
Date:   2017-05-19T19:06:48Z

NIFI-3942 Making IPLookupService reload the database file on the fly when 
detecting the file has changed




---


[GitHub] nifi issue #1831: NIFI-3942 Making IPLookupService reload the database file ...

2017-05-19 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1831
  
@markap14 Thanks for reviewing, I agree about the handling of the 
InvalidDatabaseException. I added another commit that catches the 
InvalidDatabaseException, forces a reload, and then attempts the lookup one 
more time.
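
For reference, a minimal sketch of that catch-reload-retry pattern (illustrative only, not the committed code; `reloadDatabase` is an assumed helper name):

```java
import java.util.concurrent.Callable;

import com.maxmind.db.InvalidDatabaseException;

class ReloadOnInvalidDbSketch {

    // Run a lookup; if the underlying database file was swapped out mid-read,
    // force a reload and retry the same lookup exactly once.
    <T> T lookupWithReload(final Callable<T> lookup) throws Exception {
        try {
            return lookup.call();
        } catch (final InvalidDatabaseException e) {
            reloadDatabase();      // assumed helper that re-opens the database file
            return lookup.call();  // single retry after the forced reload
        }
    }

    void reloadDatabase() { /* re-open the database reader here */ }
}
```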


---


[GitHub] nifi issue #1833: NIFI-3946: Update LookupService to take a Map instead of a...

2017-05-22 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1833
  
+1 full build passed, successfully ran an IP lookup flow, will merge to 
master


---


[GitHub] nifi issue #1835: NIFI-3951: Fixed bug that calculated the index incorrectly...

2017-05-22 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1835
  
+1 verified I no longer receive the exception with this change, will merge 
to master


---


[GitHub] nifi issue #1836: NIFI-3952: Updated UpdateRecord to pass field-related vari...

2017-05-22 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1836
  
Reviewing...


---


[GitHub] nifi issue #1836: NIFI-3952: Updated UpdateRecord to pass field-related vari...

2017-05-22 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1836
  
+1 looks good, will merge to master


---


[GitHub] nifi issue #1839: NIFI-3949: Updated Grok Reader to allow for sub-patterns t...

2017-05-22 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1839
  
+1 Looks good, was able to successfully run the expression that previously 
caused problems


---


[GitHub] nifi pull request #1842: NIFI-3732 Adding connect with timeout to StandardCo...

2017-05-22 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1842

NIFI-3732 Adding connect with timeout to StandardCommsSession and SSL…

…CommsSession to avoid blocking

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-3732

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1842.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1842


commit 7da9ca9a433c5c01a478159993cbad48e4a396e0
Author: Bryan Bende 
Date:   2017-05-23T00:51:04Z

NIFI-3732 Adding connect with timeout to StandardCommsSession and 
SSLCommsSession to avoid blocking




---


[GitHub] nifi issue #1802: NIFI-3191 - HDFS Processors Should Allow Choosing LZO Comp...

2017-05-24 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1802
  
+1 Was able to successfully use Twitter's LZO codec with the HDFS 
processors with this change, will merge to master


---


[GitHub] nifi issue #1645: NIFI-3644 - Added HBase_1_1_2_ClientMapCacheService

2017-05-24 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1645
  
Sorry for taking so long to get back to this...

I tested this using PutDistributedMapCache and FetchDistributedMapCache, 
and noticed the value coming back from fetch wasn't exactly what I had stored. 

In HBaseRowHandler we had:
`lastResultBytes = resultCell.getValueArray()`

And we need:
`lastResultBytes = Arrays.copyOfRange(resultCell.getValueArray(), resultCell.getValueOffset(), resultCell.getValueLength() + resultCell.getValueOffset());`
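
To illustrate why the offset/length copy matters, here is a minimal sketch assuming only the standard HBase `Cell` API (not the committed change): the backing array returned by `getValueArray()` may hold more than this cell's value, so only the slice at `getValueOffset()`/`getValueLength()` should be returned.

```java
import java.util.Arrays;

import org.apache.hadoop.hbase.Cell;

public class CellValueSketch {

    // Copy only this cell's value out of the (possibly shared) backing array.
    static byte[] valueOf(final Cell cell) {
        final int from = cell.getValueOffset();
        final int to = from + cell.getValueLength();
        return Arrays.copyOfRange(cell.getValueArray(), from, to);
    }
}
```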

I made a commit here that includes the change:

https://github.com/bbende/nifi/commit/dc8f14d95d6cdbab2aa6e815269fe0d98faa2fe6

I also moved MockHBaseClientService into its own class so it can be used 
by both tests and we don't have to duplicate that code.

Everything else looks good so I will go ahead and merge these changes 
together (your commit then mine). 

Thanks again for contributing! and sorry for the delay.



---


[GitHub] nifi issue #1853: NIFI-3963: RPG Yield Issue

2017-05-24 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1853
  
Reviewing...


---


[GitHub] nifi issue #1853: NIFI-3963: RPG Yield Issue

2017-05-24 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1853
  
+1 Looks good, repeated the steps outlined by Joe P and can see that we are 
now yielding correctly, will merge to master


---


[GitHub] nifi pull request #1859: NIFI-3979 Correcting ListHDFS so it will list all f...

2017-05-25 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1859

NIFI-3979 Correcting ListHDFS so it will list all files appropriately

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi list-hdfs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1859.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1859


commit 5bb74c2c36eaf2d414ac16614d000cab98218547
Author: Bryan Bende 
Date:   2017-05-25T17:31:43Z

NIFI-3979 Correcting ListHDFS so it will list all files appropriately




---


[GitHub] nifi pull request #1859: NIFI-3979 Correcting ListHDFS so it will list all f...

2017-05-25 Thread bbende
Github user bbende closed the pull request at:

https://github.com/apache/nifi/pull/1859


---


[GitHub] nifi pull request #1860: NIFI-3979 Documenting how ListHDFS maintains state ...

2017-05-25 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1860

NIFI-3979 Documenting how ListHDFS maintains state and performs listings

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-3979

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1860.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1860


commit a8b5c8c00e5704a4febe0a7ba3c604b6289f5636
Author: Bryan Bende 
Date:   2017-05-25T18:43:21Z

NIFI-3979 Documenting how ListHDFS maintains state and performs listings




---


[GitHub] nifi issue #1861: NIFI-3981: When serializing flow to cluster, use the Sched...

2017-05-25 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1861
  
Reviewing...


---


[GitHub] nifi issue #1861: NIFI-3981: When serializing flow to cluster, use the Sched...

2017-05-25 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1861
  
+1 Recreated the scenario before this change, then verified this change 
resolves the issue, will merge


---


[GitHub] nifi issue #1866: NIFI-3984 - upgraded version to 0.2.1

2017-05-26 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1866
  
+1 looks good, verified the 0.2.1 client is available in Maven central and 
brought in through the build, will merge to master, thanks!


---


[GitHub] nifi issue #1876: NIFI-3995: Updated Hwx Encoded Schema Ref Writer to write ...

2017-06-01 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1876
  
+1 Ran this against the latest HWX schema registry and verified reading and 
writing records using the HWX Content Encoded Schema Reference, will merge to 
master


---


[GitHub] nifi issue #1879: NIFI-4003: Expose configuration option for cache size and ...

2017-06-01 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1879
  
Reviewing... there are two places in HortonworksSchemaRegistry where 
client.getSchemaVersionInfo is called without first attempting to use the cache 
stored in the controller service, so the call ends up failing if the schema 
registry is down, even though the SchemaVersionInfo should be in the local cache:

```
ExecutionException: javax.ws.rs.ProcessingException: 
java.net.ConnectException: Connection refused
java.lang.RuntimeException: 
com.google.common.util.concurrent.UncheckedExecutionException: 
javax.ws.rs.ProcessingException: java.net.ConnectException: Connection refused
at 
com.hortonworks.registries.schemaregistry.client.SchemaRegistryClient.getSchemaVersionInfo(SchemaRegistryClient.java:470)
at 
org.apache.nifi.schemaregistry.hortonworks.HortonworksSchemaRegistry.retrieveSchema(HortonworksSchemaRegistry.java:271)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:89)
at com.sun.proxy.$Proxy76.retrieveSchema(Unknown Source)
at 
org.apache.nifi.schema.access.HortonworksEncodedSchemaReferenceStrategy.getSchema(HortonworksEncodedSchemaReferenceStrategy.java:69)
at 
org.apache.nifi.serialization.SchemaRegistryService.getSchema(SchemaRegistryService.java:112)
```
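
As a rough sketch of the cache-first pattern being suggested (the class, field names, and cache settings below are assumptions for illustration, not the actual HortonworksSchemaRegistry code), the remote call should only happen on a cache miss:

```java
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

class SchemaVersionCacheSketch {

    private final Cache<String, String> schemaVersionCache = CacheBuilder.newBuilder()
            .maximumSize(1000)
            .expireAfterWrite(1, TimeUnit.HOURS)
            .build();

    String getSchemaText(final String schemaName) throws Exception {
        // The loader only runs on a cache miss, so a registry outage does not
        // break lookups for schemas that were already fetched and cached.
        return schemaVersionCache.get(schemaName, () -> fetchFromRegistry(schemaName));
    }

    private String fetchFromRegistry(final String schemaName) {
        // placeholder for client.getSchemaVersionInfo(...) and conversion to schema text
        throw new UnsupportedOperationException("remote registry call goes here");
    }
}
```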


---


[GitHub] nifi issue #1879: NIFI-4003: Expose configuration option for cache size and ...

2017-06-01 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1879
  
Thanks! I'm a +1 now, will merge to master


---


[GitHub] nifi pull request #1878: NIFI-4002: Add PutElasticsearchHttpRecord processor

2017-06-01 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1878#discussion_r119678825
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java
 ---
@@ -0,0 +1,559 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+import okhttp3.ResponseBody;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.AttributeExpression;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.record.path.validation.RecordPathValidator;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.MalformedRecordException;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.DataType;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordField;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.type.ArrayDataType;
+import org.apache.nifi.serialization.record.type.ChoiceDataType;
+import org.apache.nifi.serialization.record.type.MapDataType;
+import org.apache.nifi.serialization.record.type.RecordDataType;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+import org.apache.nifi.util.StringUtils;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonGenerator;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.node.ArrayNode;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.math.BigInteger;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+
+import static org.apache.commons.lang3.StringUtils.trimToEmpty;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@Tags({"elasticsearch", "insert", "update", "upsert", "delete", "write", 
"put", "http", "record"})
+@CapabilityDescription("Writes the contents of a FlowFile to 
Elasticsearch, using the specified parameters such as "
++ "the index to insert into and the type of the document.")
+public class PutElasticsearchHttpRecord extends 
AbstractElasticsearchHttpProcessor {
+
+public static final Re

[GitHub] nifi pull request #1878: NIFI-4002: Add PutElasticsearchHttpRecord processor

2017-06-01 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1878#discussion_r119714959
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java
 ---
@@ -0,0 +1,559 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+import okhttp3.ResponseBody;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.AttributeExpression;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.record.path.validation.RecordPathValidator;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.MalformedRecordException;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.DataType;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordField;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.type.ArrayDataType;
+import org.apache.nifi.serialization.record.type.ChoiceDataType;
+import org.apache.nifi.serialization.record.type.MapDataType;
+import org.apache.nifi.serialization.record.type.RecordDataType;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+import org.apache.nifi.util.StringUtils;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonGenerator;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.node.ArrayNode;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.math.BigInteger;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+
+import static org.apache.commons.lang3.StringUtils.trimToEmpty;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@Tags({"elasticsearch", "insert", "update", "upsert", "delete", "write", 
"put", "http", "record"})
+@CapabilityDescription("Writes the contents of a FlowFile to 
Elasticsearch, using the specified parameters such as "
++ "the index to insert into and the type of the document.")
+public class PutElasticsearchHttpRecord extends 
AbstractElasticsearchHttpProcessor {
+
+public static final Re

[GitHub] nifi pull request #1878: NIFI-4002: Add PutElasticsearchHttpRecord processor

2017-06-01 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1878#discussion_r119679014
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java
 ---
@@ -0,0 +1,559 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+import okhttp3.ResponseBody;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.AttributeExpression;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.record.path.validation.RecordPathValidator;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.MalformedRecordException;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.DataType;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordField;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.type.ArrayDataType;
+import org.apache.nifi.serialization.record.type.ChoiceDataType;
+import org.apache.nifi.serialization.record.type.MapDataType;
+import org.apache.nifi.serialization.record.type.RecordDataType;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+import org.apache.nifi.util.StringUtils;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonGenerator;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.node.ArrayNode;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.math.BigInteger;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+
+import static org.apache.commons.lang3.StringUtils.trimToEmpty;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@Tags({"elasticsearch", "insert", "update", "upsert", "delete", "write", 
"put", "http", "record"})
+@CapabilityDescription("Writes the contents of a FlowFile to 
Elasticsearch, using the specified parameters such as "
++ "the index to insert into and the type of the document.")
+public class PutElasticsearchHttpRecord extends 
AbstractElasticsearchHttpProcessor {
+
+public static final Re

[GitHub] nifi pull request #1878: NIFI-4002: Add PutElasticsearchHttpRecord processor

2017-06-01 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1878#discussion_r119713799
  
--- Diff: 
nifi-nar-bundles/nifi-elasticsearch-bundle/nifi-elasticsearch-processors/src/main/java/org/apache/nifi/processors/elasticsearch/PutElasticsearchHttpRecord.java
 ---
@@ -0,0 +1,559 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.processors.elasticsearch;
+
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+import okhttp3.ResponseBody;
+import org.apache.nifi.annotation.behavior.EventDriven;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.expression.AttributeExpression;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.record.path.FieldValue;
+import org.apache.nifi.record.path.RecordPath;
+import org.apache.nifi.record.path.util.RecordPathCache;
+import org.apache.nifi.record.path.validation.RecordPathValidator;
+import org.apache.nifi.schema.access.SchemaNotFoundException;
+import org.apache.nifi.serialization.MalformedRecordException;
+import org.apache.nifi.serialization.RecordReader;
+import org.apache.nifi.serialization.RecordReaderFactory;
+import org.apache.nifi.serialization.record.DataType;
+import org.apache.nifi.serialization.record.Record;
+import org.apache.nifi.serialization.record.RecordField;
+import org.apache.nifi.serialization.record.RecordFieldType;
+import org.apache.nifi.serialization.record.RecordSchema;
+import org.apache.nifi.serialization.record.type.ArrayDataType;
+import org.apache.nifi.serialization.record.type.ChoiceDataType;
+import org.apache.nifi.serialization.record.type.MapDataType;
+import org.apache.nifi.serialization.record.type.RecordDataType;
+import org.apache.nifi.serialization.record.util.DataTypeUtils;
+import org.apache.nifi.util.StringUtils;
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonGenerator;
+import org.codehaus.jackson.JsonNode;
+import org.codehaus.jackson.node.ArrayNode;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.math.BigInteger;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+
+import static org.apache.commons.lang3.StringUtils.trimToEmpty;
+
+
+@InputRequirement(InputRequirement.Requirement.INPUT_REQUIRED)
+@EventDriven
+@Tags({"elasticsearch", "insert", "update", "upsert", "delete", "write", 
"put", "http", "record"})
+@CapabilityDescription("Writes the contents of a FlowFile to 
Elasticsearch, using the specified parameters such as "
++ "the index to insert into and the type of the document.")
+public class PutElasticsearchHttpRecord extends 
AbstractElasticsearchHttpProcessor {
+
+public static final Re

[GitHub] nifi issue #1894: NIFI-4029: Allow null Avro default values in HortonworksSc...

2017-06-07 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1894
  
+1 Verified this fixes the issue, will merge to master, thanks!


---


[GitHub] nifi pull request #1896: NIFI-4030 Populating default values on GenericRecor...

2017-06-07 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1896

NIFI-4030 Populating default values on GenericRecord from Avro schema…

… if not present in RecordSchema

Thank you for submitting a contribution to Apache NiFi.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number 
you are trying to resolve? Pay particular attention to the hyphen "-" character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
-Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to 
.name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-4030

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1896.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1896


commit 1b9f5d8d40d211e8ee2ecfeca64e9e6eccc24ba8
Author: Bryan Bende 
Date:   2017-06-07T17:00:13Z

NIFI-4030 Populating default values on GenericRecord from Avro schema if 
not present in RecordSchema




---


[GitHub] nifi issue #1897: NIFI-3653: Introduce ManagedAuthorizer

2017-06-07 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1897
  
This looks cool, will review...


---


[GitHub] nifi pull request #1901: NIFI-4043 Initial commit of nifi-redis-bundle

2017-06-08 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1901

NIFI-4043 Initial commit of nifi-redis-bundle

Adds RedisConnectionPoolService and RedisDistributedMapCacheClientService

- Refactored Wait/Notify to use new replace method based on previous value, 
deprecating replace based on revision
- Updating protocol version on DMC and require v3 for new replace
- Wait/Notify working against Redis and standard DMCS
- Working connection to sentinel configuration
- Adding validation to ensure Redis DMCS doesn't use a connection pool in 
clustered mode
- Adding NOTICE and LICENSE to nifi-redis-service-api-nar


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi nifi-redis-bundle

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1901.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1901


commit 3d9fa5fd186d7930b9cb30a9cb49df9b5e07f5fa
Author: Bryan Bende 
Date:   2017-05-25T16:35:23Z

NIFI-4043 Initial commit of nifi-redis-bundle with 
RedisConnectionPoolService and RedisDistributedMapCacheClientService
- Refactored Wait/Notify to use new replace method based on previous value, 
deprecating replace based on revision
- Updating protocol version on DMC and require v3 for new replace
- Wait/Notify working against Redis and standard DMCS
- Working connection to sentinel configuration
- Adding validation to ensure Redis DMCS doesn't use a connection pool in 
clustered mode
- Adding NOTICE and LICENSE to nifi-redis-service-api-nar




---


[GitHub] nifi pull request #1901: NIFI-4043 Initial commit of nifi-redis-bundle

2017-06-08 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1901#discussion_r121037027
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-distributed-cache-client-service-api/src/main/java/org/apache/nifi/distributed/cache/client/AtomicDistributedMapCacheClient.java
 ---
@@ -71,6 +73,23 @@
  * @return true only if the key is replaced.
  * @throws IOException if unable to communicate with the remote 
instance
  */
+@Deprecated
  boolean replace(K key, V value, Serializer<K> keySerializer, Serializer<V> valueSerializer, long revision) throws IOException;
--- End diff --

@ijokarumawak I think using generics for the revision should work. I had 
actually considered this idea and I'm not really sure why I didn't go this route, 
but I am open to giving it a try.

We would also need to pass in a Serializer<R> revisionSerializer to the 
replace method, right?

So the process would be... fetch the existing CacheEntry, update the value 
(V), and pass it back in with the serializers.
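
Roughly, that fetch-update-replace flow would look like the sketch below (the entry class, method names, and the Serializer shape here are assumptions for illustration, not the final API):

```java
import java.io.IOException;

// Hypothetical shapes for illustration only; not the actual NiFi interfaces.
interface SerializerSketch<T> { byte[] serialize(T value) throws IOException; }

class CacheEntrySketch<K, V, R> {
    final K key;
    V value;
    final R revision;   // e.g. a long revision, or a Redis WATCH-style token

    CacheEntrySketch(final K key, final V value, final R revision) {
        this.key = key;
        this.value = value;
        this.revision = revision;
    }
}

class AtomicReplaceFlowSketch {

    // Fetch the existing entry, update its value, then pass it back in with the
    // serializers; the replace only succeeds if the revision still matches.
    <K, V, R> boolean updateIfUnchanged(final CacheEntrySketch<K, V, R> fetched,
                                        final V newValue,
                                        final SerializerSketch<K> keySerializer,
                                        final SerializerSketch<V> valueSerializer) throws IOException {
        fetched.value = newValue;
        return replace(fetched, keySerializer, valueSerializer);
    }

    <K, V, R> boolean replace(final CacheEntrySketch<K, V, R> entry,
                              final SerializerSketch<K> keySerializer,
                              final SerializerSketch<V> valueSerializer) throws IOException {
        // Implementation-specific optimistic check on entry.revision goes here.
        return false;
    }
}
```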

I'll give it a try in the next couple of days, thanks for taking a look!


---


[GitHub] nifi pull request #1901: NIFI-4043 Initial commit of nifi-redis-bundle

2017-06-08 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1901#discussion_r121037053
  
--- Diff: 
nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/WaitNotifyProtocol.java
 ---
@@ -144,6 +150,28 @@ public void setReleasableCount(int releasableCount) {
 
 }
 
+/**
+ * Class to pass around the signal with the JSON that was deserialized 
to create it.
+ */
+public static class SignalHolder {
--- End diff --

Agreed, we can definitely get rid of the SignalHolder if you use the 
approach above.


---


[GitHub] nifi pull request #1901: NIFI-4043 Initial commit of nifi-redis-bundle

2017-06-08 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1901#discussion_r121039023
  
--- Diff: 
nifi-nar-bundles/nifi-standard-services/nifi-distributed-cache-client-service-api/src/main/java/org/apache/nifi/distributed/cache/client/AtomicDistributedMapCacheClient.java
 ---
@@ -71,6 +73,23 @@
  * @return true only if the key is replaced.
  * @throws IOException if unable to communicate with the remote 
instance
  */
+@Deprecated
  boolean replace(K key, V value, Serializer<K> keySerializer, Serializer<V> valueSerializer, long revision) throws IOException;
--- End diff --

Thanks! That should make things easier for me :)


---


[GitHub] nifi issue #1904: NIFI-4049: Refactor AtomicDistributedMapCacheClient

2017-06-09 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1904
  
@ijokarumawak from a quick glance this looks like it should work... I need 
to wrap up a couple of other things and then I will take a closer look and 
make sure I can implement the replace with Redis using this approach.


---


[GitHub] nifi issue #1897: NIFI-3653: Introduce ManagedAuthorizer

2017-06-09 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1897
  
Thanks Matt, I've been testing this for the past two days and so far 
everything seems to be working well. I will take a look at the latest commit.


---


[GitHub] nifi issue #1897: NIFI-3653: Introduce ManagedAuthorizer

2017-06-09 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1897
  
Built the latest changes cleanly and ran a secure cluster successfully with 
the new style of config. Based on that and the previous testing, I think we are 
good to merge!


---


[GitHub] nifi issue #1904: NIFI-4049: Refactor AtomicDistributedMapCacheClient

2017-06-12 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1904
  
+1 Tested this out and verified Wait/Notify still works as expected with 
these changes. 

I agree with the full refactoring... since we have component versioning now, 
someone could also continue to run the old versions (1.3.0) of these services 
if they needed to.

I'll merge to master and start working on the updates to my Redis 
implementation.


---


[GitHub] nifi issue #1901: NIFI-4043 Initial commit of nifi-redis-bundle

2017-06-12 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1901
  
Going to close this PR and put up a new one based on NIFI-4049.


---


[GitHub] nifi pull request #1901: NIFI-4043 Initial commit of nifi-redis-bundle

2017-06-12 Thread bbende
Github user bbende closed the pull request at:

https://github.com/apache/nifi/pull/1901


---


[GitHub] nifi pull request #1911: NIFI-4043 Initial commit of nifi-redis-bundle

2017-06-12 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1911

NIFI-4043 Initial commit of nifi-redis-bundle

Adds RedisConnectionPoolService and RedisDistributedMapCacheClientService

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-4043

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1911.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1911


commit 7df3278961ae1171068d176959be1b421a2a2290
Author: Bryan Bende 
Date:   2017-06-12T19:53:20Z

NIFI-4043 Initial commit of nifi-redis-bundle




---


[GitHub] nifi issue #1911: NIFI-4043 Initial commit of nifi-redis-bundle

2017-06-12 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1911
  
@ijokarumawak FYI, this is the new PR with 
RedisDistributedMapCacheClientService, based on the AtomicDistributedMapCacheClient 
refactor.


---


[GitHub] nifi pull request #1918: NIFI-4061 Add a RedisStateProvider

2017-06-15 Thread bbende
GitHub user bbende opened a pull request:

https://github.com/apache/nifi/pull/1918

NIFI-4061 Add a RedisStateProvider

This PR adds a RedisStateProvider that can be used as an alternative to the 
ZooKeeperStateProvider for clustered state.

This PR includes the Redis work from NIFI-4043, so this could be merged to 
include both NIFI-4043 and NIFI-4061, or the other one could be merged first 
and then this branch could be rebased.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bbende/nifi NIFI-4061

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/1918.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1918


commit 3cfe414eabc82192797fbd5d39c6c27860dc3eaa
Author: Bryan Bende 
Date:   2017-06-12T19:53:20Z

NIFI-4043 Initial commit of nifi-redis-bundle

commit 1581cd1662bf6239465fe6fefa472903788a255e
Author: Bryan Bende 
Date:   2017-06-13T19:57:22Z

NIFI-4061 Initial version of RedisStateProvider
- Adding PropertyContext and updating existing contexts to extend it
- Added embedded Redis for unit testing
- Added wrapped StateProvider with NAR ClassLoader in 
StandardStateManagerProvider
- Updating state-management.xml with config for RedisStateProvider
- Renaming tests that use RedisServer to be IT tests so they don't run all 
the time




---


[GitHub] nifi pull request #1918: NIFI-4061 Add a RedisStateProvider

2017-06-19 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1918#discussion_r122836075
  
--- Diff: 
nifi-api/src/main/java/org/apache/nifi/context/PropertyContext.java ---
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.context;
+
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.PropertyValue;
+
+/**
+ * A context for retrieving a PropertyValue from a PropertyDescriptor.
+ */
+public interface PropertyContext {
--- End diff --

Good call. I added getAllProperties since a lot of the contexts already had a 
getProperties that returned Map<PropertyDescriptor, String> and would conflict.
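
For reference, a minimal sketch of the interface shape being described; the 
getAllProperties signature shown here is an assumption about the non-conflicting 
accessor mentioned above:

    import java.util.Map;
    import org.apache.nifi.components.PropertyDescriptor;
    import org.apache.nifi.components.PropertyValue;

    // Sketch of the PropertyContext discussed in this diff; getAllProperties is assumed to
    // return the effective values keyed by property name so it does not clash with the
    // Map<PropertyDescriptor, String> getProperties() that several contexts already expose.
    public interface PropertyContext {

        // Resolve the configured value for a single property descriptor.
        PropertyValue getProperty(PropertyDescriptor descriptor);

        // All effective property values, keyed by property name.
        Map<String, String> getAllProperties();
    }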


---


[GitHub] nifi pull request #1918: NIFI-4061 Add a RedisStateProvider

2017-06-19 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1918#discussion_r122836288
  
--- Diff: 
nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-resources/src/main/resources/conf/state-management.xml
 ---
@@ -63,4 +63,68 @@
 <property name="Session Timeout">10 seconds</property>
 <property name="Access Control">Open</property>
 
+
+

[GitHub] nifi pull request #1918: NIFI-4061 Add a RedisStateProvider

2017-06-19 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1918#discussion_r122836349
  
--- Diff: 
nifi-nar-bundles/nifi-redis-bundle/nifi-redis-extensions/src/main/java/org/apache/nifi/redis/service/RedisConnectionPoolService.java
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.redis.service;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnDisabled;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.context.PropertyContext;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.redis.RedisConnectionPool;
+import org.apache.nifi.redis.RedisType;
+import org.apache.nifi.redis.util.RedisUtils;
+import org.springframework.data.redis.connection.RedisConnection;
+import 
org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+
+@Tags({"redis", "cache"})
+@CapabilityDescription("A service that provides connections to Redis.")
+public class RedisConnectionPoolService extends AbstractControllerService 
implements RedisConnectionPool {
+
+private volatile PropertyContext context;
+private volatile RedisType redisType;
+private volatile JedisConnectionFactory connectionFactory;
+
+@Override
+protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+return RedisUtils.REDIS_CONNECTION_PROPERTY_DESCRIPTORS;
+}
+
+@Override
+protected Collection<ValidationResult> customValidate(ValidationContext validationContext) {
+final List<ValidationResult> results = new ArrayList<>(RedisUtils.validate(validationContext));
--- End diff --

Good call, made that change.


---


[GitHub] nifi pull request #1918: NIFI-4061 Add a RedisStateProvider

2017-06-19 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1918#discussion_r122836427
  
--- Diff: 
nifi-nar-bundles/nifi-redis-bundle/nifi-redis-extensions/src/main/java/org/apache/nifi/redis/service/RedisDistributedMapCacheClientService.java
 ---
@@ -0,0 +1,313 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.redis.service;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnDisabled;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.distributed.cache.client.AtomicCacheEntry;
+import 
org.apache.nifi.distributed.cache.client.AtomicDistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.redis.RedisConnectionPool;
+import org.apache.nifi.redis.RedisType;
+import org.apache.nifi.redis.util.RedisAction;
+import org.apache.nifi.util.Tuple;
+import org.springframework.data.redis.connection.RedisConnection;
+import org.springframework.data.redis.core.Cursor;
+import org.springframework.data.redis.core.ScanOptions;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+
+@Tags({ "redis", "distributed", "cache", "map" })
+@CapabilityDescription("An implementation of DistributedMapCacheClient 
that uses Redis as the backing cache. This service relies on " +
+"the WATCH, MULTI, and EXEC commands in Redis, which are not fully 
supported when Redis is clustered. As a result, this service " +
+"can only be used with a Redis Connection Pool that is configured 
for standalone or sentinel mode. Sentinel mode can be used to " +
+"provide high-availability configurations.")
+public class RedisDistributedMapCacheClientService extends 
AbstractControllerService implements AtomicDistributedMapCacheClient {
+
+public static final PropertyDescriptor REDIS_CONNECTION_POOL = new 
PropertyDescriptor.Builder()
+.name("redis-connection-pool")
+.displayName("Redis Connection Pool")
+.identifiesControllerService(RedisConnectionPool.class)
+.required(true)
+.build();
+
+static final List<PropertyDescriptor> PROPERTY_DESCRIPTORS;
+static {
+final List<PropertyDescriptor> props = new ArrayList<>();
+props.add(REDIS_CONNECTION_POOL);
+PROPERTY_DESCRIPTORS = Collections.unmodifiableList(props);
+}
+
+private volatile RedisConnectionPool redisConnectionPool;
+
+@Override
+protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+return PROPERTY_DESCRIPTORS;
+}
+
+@Override
+protected Collection<ValidationResult> customValidate(ValidationContext validationContext) {
+final List<ValidationResult> results = new ArrayList<>();
+
+final RedisConnectionPool redisConnectionPool = 
validationContext.getProperty(REDIS_CONNECTION_POOL).asControllerService(RedisConnectionPool.class);
+if (redisConnectionPool != null) {
+final RedisType redisType = redisConnectionPool.getRedisType();
+if (redisType != null && redisType == RedisType.CLUSTER) {
+results.add(new ValidationResult.Builder()
+ 

[GitHub] nifi pull request #1918: NIFI-4061 Add a RedisStateProvider

2017-06-19 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1918#discussion_r122836730
  
--- Diff: 
nifi-nar-bundles/nifi-redis-bundle/nifi-redis-extensions/src/main/java/org/apache/nifi/redis/service/RedisDistributedMapCacheClientService.java
 ---
@@ -0,0 +1,313 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.redis.service;
+
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnDisabled;
+import org.apache.nifi.annotation.lifecycle.OnEnabled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.controller.AbstractControllerService;
+import org.apache.nifi.controller.ConfigurationContext;
+import org.apache.nifi.distributed.cache.client.AtomicCacheEntry;
+import 
org.apache.nifi.distributed.cache.client.AtomicDistributedMapCacheClient;
+import org.apache.nifi.distributed.cache.client.Deserializer;
+import org.apache.nifi.distributed.cache.client.Serializer;
+import org.apache.nifi.redis.RedisConnectionPool;
+import org.apache.nifi.redis.RedisType;
+import org.apache.nifi.redis.util.RedisAction;
+import org.apache.nifi.util.Tuple;
+import org.springframework.data.redis.connection.RedisConnection;
+import org.springframework.data.redis.core.Cursor;
+import org.springframework.data.redis.core.ScanOptions;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+
+@Tags({ "redis", "distributed", "cache", "map" })
+@CapabilityDescription("An implementation of DistributedMapCacheClient 
that uses Redis as the backing cache. This service relies on " +
+"the WATCH, MULTI, and EXEC commands in Redis, which are not fully 
supported when Redis is clustered. As a result, this service " +
+"can only be used with a Redis Connection Pool that is configured 
for standalone or sentinel mode. Sentinel mode can be used to " +
+"provide high-availability configurations.")
+public class RedisDistributedMapCacheClientService extends 
AbstractControllerService implements AtomicDistributedMapCacheClient {
+
+public static final PropertyDescriptor REDIS_CONNECTION_POOL = new 
PropertyDescriptor.Builder()
+.name("redis-connection-pool")
+.displayName("Redis Connection Pool")
+.identifiesControllerService(RedisConnectionPool.class)
+.required(true)
+.build();
+
+static final List<PropertyDescriptor> PROPERTY_DESCRIPTORS;
+static {
+final List<PropertyDescriptor> props = new ArrayList<>();
+props.add(REDIS_CONNECTION_POOL);
+PROPERTY_DESCRIPTORS = Collections.unmodifiableList(props);
+}
+
+private volatile RedisConnectionPool redisConnectionPool;
+
+@Override
+protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+return PROPERTY_DESCRIPTORS;
+}
+
+@Override
+protected Collection<ValidationResult> customValidate(ValidationContext validationContext) {
+final List<ValidationResult> results = new ArrayList<>();
+
+final RedisConnectionPool redisConnectionPool = 
validationContext.getProperty(REDIS_CONNECTION_POOL).asControllerService(RedisConnectionPool.class);
+if (redisConnectionPool != null) {
+final RedisType redisType = redisConnectionPool.getRedisType();
+if (redisType != null && redisType == RedisType.CLUSTER) {
+results.add(new ValidationResult.Builder()
+ 

[GitHub] nifi pull request #1918: NIFI-4061 Add a RedisStateProvider

2017-06-19 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1918#discussion_r122836798
  
--- Diff: 
nifi-nar-bundles/nifi-redis-bundle/nifi-redis-extensions/src/main/java/org/apache/nifi/redis/state/RedisStateMapJsonSerDe.java
 ---
@@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.redis.state;
+
+import com.fasterxml.jackson.core.JsonFactory;
+import com.fasterxml.jackson.core.JsonGenerator;
+import com.fasterxml.jackson.core.JsonParser;
+import com.fasterxml.jackson.core.JsonToken;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.util.Map;
+
+/**
+ * A RedisStateMapSerDe that uses JSON as the underlying representation.
+ */
+public class RedisStateMapJsonSerDe implements RedisStateMapSerDe {
+
+public static final String FIELD_VERSION = "version";
+public static final String FIELD_ENCODING = "encodingVersion";
+public static final String FIELD_STATE_VALUES = "stateValues";
+
+private final JsonFactory jsonFactory = new JsonFactory();
+
+@Override
+public byte[] serialize(final RedisStateMap stateMap) throws 
IOException {
+if (stateMap == null) {
+return null;
+}
+
+try (final ByteArrayOutputStream out = new 
ByteArrayOutputStream()) {
+final JsonGenerator jsonGenerator = 
jsonFactory.createGenerator(out);
+jsonGenerator.writeStartObject();
+jsonGenerator.writeNumberField(FIELD_VERSION, 
stateMap.getVersion());
+jsonGenerator.writeNumberField(FIELD_ENCODING, 
stateMap.getEncodingVersion());
+
+jsonGenerator.writeObjectFieldStart(FIELD_STATE_VALUES);
+for (Map.Entry<String, String> entry : stateMap.toMap().entrySet()) {
+jsonGenerator.writeStringField(entry.getKey(), 
entry.getValue());
+}
+jsonGenerator.writeEndObject();
+
+jsonGenerator.writeEndObject();
+jsonGenerator.flush();
+
+return out.toByteArray();
+}
+}
+
+@Override
+public RedisStateMap deserialize(final byte[] data) throws IOException 
{
+if (data == null || data.length == 0) {
+return null;
+}
+
+final RedisStateMap.Builder builder = new RedisStateMap.Builder();
+
+try (final JsonParser jsonParser = jsonFactory.createParser(data)) 
{
+while (jsonParser.nextToken() != JsonToken.END_OBJECT) {
--- End diff --

I knew that code should have been simpler :) I updated it to use 
readValueAsTree.
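
For illustration, a minimal sketch of what a readValueAsTree-based deserialize 
could look like; the RedisStateMap.Builder method names below (version, 
encodingVersion, stateValues) are assumptions based on the fields written by 
serialize above:

    // Assumes com.fasterxml.jackson.databind.ObjectMapper / JsonNode and java.util.HashMap
    // in addition to the imports already present in the class above.
    try (final JsonParser jsonParser = new ObjectMapper().getFactory().createParser(data)) {
        final JsonNode root = jsonParser.readValueAsTree();

        // Collect the nested stateValues object back into a plain Map.
        final Map<String, String> stateValues = new HashMap<>();
        root.get(FIELD_STATE_VALUES).fields()
                .forEachRemaining(e -> stateValues.put(e.getKey(), e.getValue().asText()));

        return new RedisStateMap.Builder()
                .version(root.get(FIELD_VERSION).asLong())          // assumed builder method
                .encodingVersion(root.get(FIELD_ENCODING).asInt())  // assumed builder method
                .stateValues(stateValues)                           // assumed builder method
                .build();
    }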


---


[GitHub] nifi pull request #1918: NIFI-4061 Add a RedisStateProvider

2017-06-19 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1918#discussion_r122837047
  
--- Diff: 
nifi-nar-bundles/nifi-redis-bundle/nifi-redis-extensions/src/main/java/org/apache/nifi/redis/util/RedisUtils.java
 ---
@@ -0,0 +1,422 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.redis.util;
+
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.context.PropertyContext;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.redis.RedisType;
+import org.apache.nifi.util.StringUtils;
+import org.springframework.data.redis.connection.RedisClusterConfiguration;
+import 
org.springframework.data.redis.connection.RedisSentinelConfiguration;
+import 
org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
+import redis.clients.jedis.JedisPoolConfig;
+import redis.clients.jedis.JedisShardInfo;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+public class RedisUtils {
+
+// These properties are shared between the connection pool controller 
service and the state provider, the name
+// is purposely set to be more human-readable since that will be 
referenced in state-management.xml
+
+public static final AllowableValue REDIS_MODE_STANDALONE = new 
AllowableValue(RedisType.STANDALONE.getDisplayName(), 
RedisType.STANDALONE.getDisplayName(), RedisType.STANDALONE.getDescription());
+public static final AllowableValue REDIS_MODE_SENTINEL = new 
AllowableValue(RedisType.SENTINEL.getDisplayName(), 
RedisType.SENTINEL.getDisplayName(), RedisType.SENTINEL.getDescription());
+public static final AllowableValue REDIS_MODE_CLUSTER = new 
AllowableValue(RedisType.CLUSTER.getDisplayName(), 
RedisType.CLUSTER.getDisplayName(), RedisType.CLUSTER.getDescription());
+
+public static final PropertyDescriptor REDIS_MODE = new 
PropertyDescriptor.Builder()
+.name("Redis Mode")
+.displayName("Redis Mode")
+.description("The type of Redis being communicated with - 
standalone, sentinel, or clustered.")
+.allowableValues(REDIS_MODE_STANDALONE, REDIS_MODE_SENTINEL, 
REDIS_MODE_CLUSTER)
+.defaultValue(REDIS_MODE_STANDALONE.getValue())
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.required(true)
+.build();
+
+public static final PropertyDescriptor CONNECTION_STRING = new 
PropertyDescriptor.Builder()
+.name("Connection String")
+.displayName("Connection String")
+.description("The connection string for Redis. In a standalone 
instance this value will be of the form hostname:port. " +
+"In a sentinel instance this value will be the 
comma-separated list of sentinels, such as host1:port1,host2:port2,host3:port3. 
" +
+"In a clustered instance this value will be the 
comma-separated list of cluster masters, such as 
host1:port,host2:port,host3:port.")
+.required(true)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.expressionLanguageSupported(true)
+.build();
+
+public static final PropertyDescriptor DATABASE = new 
PropertyDescriptor.Builder()
+.name("Database Index")
+.displayName("Database Index")
+.description("The database index to be used by connections 
cre

[GitHub] nifi pull request #1918: NIFI-4061 Add a RedisStateProvider

2017-06-19 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1918#discussion_r122837003
  
--- Diff: 
nifi-nar-bundles/nifi-redis-bundle/nifi-redis-extensions/src/main/java/org/apache/nifi/redis/state/RedisStateProvider.java
 ---
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.redis.state;
+
+import org.apache.nifi.components.AbstractConfigurableComponent;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.components.state.Scope;
+import org.apache.nifi.components.state.StateMap;
+import org.apache.nifi.components.state.StateProvider;
+import org.apache.nifi.components.state.StateProviderInitializationContext;
+import org.apache.nifi.context.PropertyContext;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.redis.RedisType;
+import org.apache.nifi.redis.util.RedisAction;
+import org.apache.nifi.redis.util.RedisUtils;
+import org.springframework.data.redis.connection.RedisConnection;
+import 
org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
+
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * A StateProvider backed by Redis.
+ */
+public class RedisStateProvider extends AbstractConfigurableComponent 
implements StateProvider {
+
+static final int ENCODING_VERSION = 1;
+
+private String identifier;
+private PropertyContext context;
+private ComponentLog logger;
+
+private volatile boolean enabled;
+private volatile JedisConnectionFactory connectionFactory;
+
+private final RedisStateMapSerDe serDe = new RedisStateMapJsonSerDe();
+
+@Override
+public final void initialize(final StateProviderInitializationContext 
context) throws IOException {
+this.context = context;
+this.identifier = context.getIdentifier();
+this.logger = context.getLogger();
+}
+
+@Override
+protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
+return RedisUtils.REDIS_CONNECTION_PROPERTY_DESCRIPTORS;
+}
+
+@Override
+protected Collection<ValidationResult> customValidate(ValidationContext validationContext) {
+final List<ValidationResult> results = new ArrayList<>(RedisUtils.validate(validationContext));
+
+final RedisType redisType = 
RedisType.fromDisplayName(validationContext.getProperty(RedisUtils.REDIS_MODE).getValue());
+if (redisType != null && redisType == RedisType.CLUSTER) {
+results.add(new ValidationResult.Builder()
+.subject(RedisUtils.REDIS_MODE.getDisplayName())
+.valid(false)
+.explanation(RedisUtils.REDIS_MODE.getDisplayName()
++ " is configured in clustered mode, and this 
service requires a non-clustered Redis")
+.build());
+}
+
+return results;
+}
+
+@Override
+public String getIdentifier() {
+return identifier;
+}
+
+@Override
+public void enable() {
+enabled = true;
+}
+
+@Override
+public void disable() {
+enabled = false;
+}
+
+@Override
+public boolean isEnabled() {
+return enabled;
+}
+
+@Override
+public void shutdown() {
+if (connectionFactory != null) {
+connectionFactory.destroy();
+connectionFactory = null;
+}
+}
+
+@Override
+public void setState(final Map state, f

[GitHub] nifi pull request #1918: NIFI-4061 Add a RedisStateProvider

2017-06-19 Thread bbende
Github user bbende commented on a diff in the pull request:

https://github.com/apache/nifi/pull/1918#discussion_r122837202
  
--- Diff: 
nifi-nar-bundles/nifi-redis-bundle/nifi-redis-extensions/src/main/java/org/apache/nifi/redis/util/RedisUtils.java
 ---
@@ -0,0 +1,422 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.nifi.redis.util;
+
+import org.apache.nifi.components.AllowableValue;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.components.ValidationContext;
+import org.apache.nifi.components.ValidationResult;
+import org.apache.nifi.context.PropertyContext;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.apache.nifi.redis.RedisType;
+import org.apache.nifi.util.StringUtils;
+import org.springframework.data.redis.connection.RedisClusterConfiguration;
+import 
org.springframework.data.redis.connection.RedisSentinelConfiguration;
+import 
org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
+import redis.clients.jedis.JedisPoolConfig;
+import redis.clients.jedis.JedisShardInfo;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+public class RedisUtils {
+
+// These properties are shared between the connection pool controller 
service and the state provider, the name
+// is purposely set to be more human-readable since that will be 
referenced in state-management.xml
+
+public static final AllowableValue REDIS_MODE_STANDALONE = new 
AllowableValue(RedisType.STANDALONE.getDisplayName(), 
RedisType.STANDALONE.getDisplayName(), RedisType.STANDALONE.getDescription());
+public static final AllowableValue REDIS_MODE_SENTINEL = new 
AllowableValue(RedisType.SENTINEL.getDisplayName(), 
RedisType.SENTINEL.getDisplayName(), RedisType.SENTINEL.getDescription());
+public static final AllowableValue REDIS_MODE_CLUSTER = new 
AllowableValue(RedisType.CLUSTER.getDisplayName(), 
RedisType.CLUSTER.getDisplayName(), RedisType.CLUSTER.getDescription());
+
+public static final PropertyDescriptor REDIS_MODE = new 
PropertyDescriptor.Builder()
+.name("Redis Mode")
+.displayName("Redis Mode")
+.description("The type of Redis being communicated with - 
standalone, sentinel, or clustered.")
+.allowableValues(REDIS_MODE_STANDALONE, REDIS_MODE_SENTINEL, 
REDIS_MODE_CLUSTER)
+.defaultValue(REDIS_MODE_STANDALONE.getValue())
+.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
+.required(true)
+.build();
+
+public static final PropertyDescriptor CONNECTION_STRING = new 
PropertyDescriptor.Builder()
+.name("Connection String")
+.displayName("Connection String")
+.description("The connection string for Redis. In a standalone 
instance this value will be of the form hostname:port. " +
+"In a sentinel instance this value will be the 
comma-separated list of sentinels, such as host1:port1,host2:port2,host3:port3. 
" +
+"In a clustered instance this value will be the 
comma-separated list of cluster masters, such as 
host1:port,host2:port,host3:port.")
+.required(true)
+.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
+.expressionLanguageSupported(true)
+.build();
+
+public static final PropertyDescriptor DATABASE = new 
PropertyDescriptor.Builder()
+.name("Database Index")
+.displayName("Database Index")
+.description("The database index to be used by connections 
cre

[GitHub] nifi issue #1918: NIFI-4061 Add a RedisStateProvider

2017-06-19 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1918
  
@markap14 thanks for reviewing! I pushed up a commit that should address 
your initial comments and I tried to write a reply to each one so you have an 
idea of what I did to address it. Let me know if there is anything else.


---


[GitHub] nifi issue #1150: NIFI-2925: When swapping in FlowFiles, do not assume that ...

2016-10-24 Thread bbende
Github user bbende commented on the issue:

https://github.com/apache/nifi/pull/1150
  
Reviewing...


---

