Re: Problems with frequent Nifi core dumping

2016-11-16 Thread Joe Witt
Hello Cheryl

Nope - not something that I've seen.  But could you try doing what it
shows in that log?

"Failed to write core dump. Core dumps have been disabled. To enable
core dumping, try "ulimit -c unlimited" before starting Java again"

There *might* be some useful data in the core dump itself.
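For example, something along these lines before the next start (a rough
sketch, assuming a tarball install run via bin/nifi.sh; adjust paths to your
environment):

  ulimit -c unlimited      # allow core files of any size in this shell
  ulimit -c                # verify; should print "unlimited"
  ./bin/nifi.sh stop
  ./bin/nifi.sh start      # the JVM launched from this shell inherits the limit

After the next crash, look for the hs_err_pid<pid>.log file and a core file
(its name and location depend on the OS core_pattern) in NiFi's working
directory.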

Thanks
Joe

On Wed, Nov 16, 2016 at 10:12 PM, Cheryl Jennings  wrote:
> Hi Everyone,
>
> I'm running Nifi 0.7.0 and I have started to see problems where nifi will
> crash repeatedly and often won't be stable until I've restarted the machine.
> Here's a relevant part from nifi-bootstrap.log:
>
> 2016-11-15 14:55:54,272 INFO [NiFi logging handler] org.apache.nifi.StdOut # A fatal error has been detected by the Java Runtime Environment:
> 2016-11-15 14:55:54,272 INFO [NiFi logging handler] org.apache.nifi.StdOut #
> 2016-11-15 14:55:54,272 INFO [NiFi logging handler] org.apache.nifi.StdOut #  SIGSEGV (0xb) at pc=0x0033ada09470, pid=27979, tid=140067910633216
> 2016-11-15 14:55:54,272 INFO [NiFi logging handler] org.apache.nifi.StdOut #
> 2016-11-15 14:55:54,272 INFO [NiFi logging handler] org.apache.nifi.StdOut # JRE version: Java(TM) SE Runtime Environment (8.0_74-b02) (build 1.8.0_74-b02)
> 2016-11-15 14:55:54,272 INFO [NiFi logging handler] org.apache.nifi.StdOut # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.74-b02 mixed mode linux-amd64 compressed oops)
> 2016-11-15 14:55:54,273 INFO [NiFi logging handler] org.apache.nifi.StdOut # Problematic frame:
> 2016-11-15 14:55:54,273 INFO [NiFi logging handler] org.apache.nifi.StdOut # C  [libpthread.so.0+0x9470]  pthread_mutex_lock+0x0
>
>
> A bit more detail can be found here: http://paste.ubuntu.com/23488485/
>
> Has anyone seen something like this before?
>
> Thanks!
> -Cheryl


Problems with frequent Nifi core dumping

2016-11-16 Thread Cheryl Jennings
Hi Everyone,

I'm running Nifi 0.7.0 and I have started to see problems where nifi will
crash repeatedly and often won't be stable until I've restarted the
machine.  Here's a relevant part from nifi-bootstrap.log:

2016-11-15 14:55:54,272 INFO [NiFi logging handler] org.apache.nifi.StdOut # A fatal error has been detected by the Java Runtime Environment:
2016-11-15 14:55:54,272 INFO [NiFi logging handler] org.apache.nifi.StdOut #
2016-11-15 14:55:54,272 INFO [NiFi logging handler] org.apache.nifi.StdOut #  SIGSEGV (0xb) at pc=0x0033ada09470, pid=27979, tid=140067910633216
2016-11-15 14:55:54,272 INFO [NiFi logging handler] org.apache.nifi.StdOut #
2016-11-15 14:55:54,272 INFO [NiFi logging handler] org.apache.nifi.StdOut # JRE version: Java(TM) SE Runtime Environment (8.0_74-b02) (build 1.8.0_74-b02)
2016-11-15 14:55:54,272 INFO [NiFi logging handler] org.apache.nifi.StdOut # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.74-b02 mixed mode linux-amd64 compressed oops)
2016-11-15 14:55:54,273 INFO [NiFi logging handler] org.apache.nifi.StdOut # Problematic frame:
2016-11-15 14:55:54,273 INFO [NiFi logging handler] org.apache.nifi.StdOut # C  [libpthread.so.0+0x9470]  pthread_mutex_lock+0x0


A bit more detail can be found here: http://paste.ubuntu.com/23488485/

Has anyone seen something like this before?

Thanks!
-Cheryl


Re: data flow from one s3 bucket to another

2016-11-16 Thread Koji Kawamura
Hello Gop,

Have you already found out how to move data between S3 buckets? I hope you have.
But just in case if you haven't yet, I wrote a simple NiFi flow and
shared it in Gist:
https://gist.github.com/ijokarumawak/26ff675039e252d177b1195f3576cf9a

I misconfigured the region and got an error once, but after I set up the bucket
name, region and credentials correctly, it worked as expected.
I'd recommend testing an S3-related flow with the ListS3 processor first, to see
whether the credentials can properly access the target S3 bucket.
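In case it helps, a minimal copy-between-buckets flow looks roughly like this
(property names are from the standard AWS processors; ListS3 writes the object
key to the "filename" attribute, which the other two use by default, but
double-check the defaults in your version):

  ListS3             Bucket = <source bucket>, Region = <region>, plus credentials
    -> FetchS3Object   Bucket = <source bucket>, Object Key = ${filename}
    -> PutS3Object     Bucket = <destination bucket>, Object Key = ${filename}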

Thanks,
Koji

On Fri, Oct 28, 2016 at 7:35 AM, Gop Krr  wrote:
> Has anyone implemented data copy from one S3 bucket to another? I would
> greatly appreciate it if you could share your sample processor
> configurations.
> Thanks
> Rai


Re: NiFi versions and remote process groups

2016-11-16 Thread Koji Kawamura
Hello Russell,

You should be able to connect NiFi 1.0.0 to NiFi 0.7.1 via RemoteProcessGroup.
Please check this Gist; I confirmed that it works:
https://gist.github.com/ijokarumawak/37e428d5be8ce8031220f87c5ee9601c

Did you wait for a while and refresh the canvas and the RemoteProcessGroup
on NiFi 1.0.0? Sometimes it takes a while for the RPG to recognize
remote port availability because of its asynchronous nature.
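One more thing worth double-checking on the 0.7.1 side is that site-to-site is
enabled in nifi.properties (and the instance restarted afterwards), roughly:

  nifi.remote.input.socket.port=10000   # any free port; blank means socket site-to-site is unavailable
  nifi.remote.input.secure=false

and that the URL configured on the RPG points at the 0.7.1 instance's own web
UI address. The port value above is just an example.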

Thanks,
Koji

On Thu, Nov 17, 2016 at 6:56 AM, Russell Bateman
 wrote:
> Should I be able to wire a remote process group on a NiFi 1.0.0 canvas to an
> input port on a 0.7.1 canvas? (Neither is a cluster.)
>
> I have an input port in 0.7.1, but when I click the "circle-arrow" icon on
> my NiFi 1.0.0 remote process group and drag it, I get
>
> 'NiFi Flow' does not have any output ports (true, it doesn't)
> 'NiFi Flow' does not have any input ports (false, there is one).
>
> When I try to connect the output of a processor in my NiFi 1.0.0 instance, I
> also get
> 'NiFi Flow' does not have any input ports.
>
> I presume 'NiFi Flow' refers to my running NiFi 0.7.1 instance (since that's
> its name in the title bar of the browser).
>
> I've examined:
>
> https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#Remote_Group_Transmission
> https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#site-to-site
> https://community.hortonworks.com/articles/16461/nifi-understanding-how-to-use-process-groups-and-r.html
>
> Thanks.


Re: query a blob field on db2

2016-11-16 Thread Kaggal, Vinod C.
This has been resolved, thanks to this post: 
https://community.hortonworks.com/questions/66883/nifi-cannot-handle-db2-blob-data-sends-wrong-resul.html


On 2016-11-16 11:56 (-0600), "Kaggal, Vinod C."  wrote:
> Hello!
>
> We have been using nifi successfully for a few things and can attest that
> this is a great tool to have in our tool box!
>
> Recently, we were trying to query a blob field from db2 which resulted in
> an exception.
>
> Any recommendation on how to address this would be appreciated!
>
> Query:
> SELECT BLOBTBL.BLOB_CONTENTS FROM CE_BLOB_DECOMP BLOBTBL fetch first
> 10 rows only with UR
>
> Exception:
>
> ExecuteSQL[id=93bdd730-65ba-4deb-b6a4-78a551916d1d] Unable to execute
> SQL select query SELECT BLOBTBL.BLOB_CONTENTS FROM CE_BLOB_DECOMP
> BLOBTBL fetch first 10 rows only with UR due to
> org.apache.nifi.processor.exception.ProcessException:
> com.ibm.db2.jcc.am.jo: [jcc][t4][1092][11638][3.57.110] Invalid data
> conversion: Wrong result column type for requested conversion.
> ERRORCODE=-4461, SQLSTATE=42815; routing to failure:
> org.apache.nifi.processor.exception.ProcessException:
> com.ibm.db2.jcc.am.jo: [jcc][t4][1092][11638][3.57.110] Invalid data
> conversion: Wrong result column type for requested conversion.
> ERRORCODE=-4461, SQLSTATE=42815
>
> Thank you!
> Vinod
>


Re: REST provenance/lineage API

2016-11-16 Thread Matt Gilman
Phil,

The lineage endpoints follow the same asynchronous model that the
provenance endpoints do. Here is an example curl command to initiate a
lineage search request:

curl 'https://localhost:8443/nifi-api/provenance/lineage' -H 'Content-Type:
application/json' -H 'Accept: application/json, text/javascript, */*;
q=0.01' --data-binary
'{"lineage":{"request":{"lineageRequestType":"FLOWFILE","uuid":"05f547fe-e15d-4186-a645-fbed6621aee1","eventId":"0"}}}'
--compressed

The contents of this request come from the details of a selected
provenance event. I'd suggest using your browser's Dev Tools to watch the
requests in action while you are searching provenance and viewing lineage
graphs.
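
The response to that POST includes an id for the lineage request; roughly (the
id below is a placeholder), you then poll it and clean up like this:

  curl 'https://localhost:8443/nifi-api/provenance/lineage/<lineage-id>' -H 'Accept: application/json'
  # repeat the GET until the response reports "finished": true, then
  curl -X DELETE 'https://localhost:8443/nifi-api/provenance/lineage/<lineage-id>'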

Let me know if you have any other questions. Thanks!

Matt

On Wed, Nov 16, 2016 at 11:15 AM,  wrote:

> Hello Matt, I have the same problem with the provenance/lineage REST call
>
>
>
> curl 'http://localhost:8080/nifi-api/provenance/lineage/..
>
>
>
> It is very difficult for me (sorry ☹) to find the right JSON body to send
> in the POST call.
>
> Could you give an example again?
>
>
>
> Philippe
>
> Best regards
>
>
> *-*
>
> Thanks a lot Matt
>
> It works nicely !
>
>
>
>
>
> *From:* Matt Gilman [mailto:matt.c.gil...@gmail.com
> ]
> *Sent:* Tuesday, November 15, 2016 15:08
> *To:* users@nifi.apache.org
> *Subject:* Re: REST provenance api search options
>
>
>
> Philippe,
>
>
>
> Here's an example command for initiating a provenance search:
>
>
>
> curl 'http://localhost:8080/nifi-api/provenance' -H 'Content-Type:
> application/json' -H 'Accept: application/json, text/javascript, */*;
> q=0.01' --data-binary
> '{"provenance":{"request":{"maxResults":1000,"startDate":"11/15/2016 00:00:00 EST","endDate":"11/15/2016 23:59:59 EST","searchTerms":{"FlowFileUUID":"<uuid of flowfile>","Filename":"<name of flowfile>","ProcessorID":"<uuid of processor>"}}}}' --compressed
>
>
>
> The available searchable fields are defined in your nifi.properties file
> under the following properties:
>
>
>
> nifi.provenance.repository.indexed.fields
>
> nifi.provenance.repository.indexed.attributes
>
>
>
> Because the searches can be long-running, they are performed
> asynchronously. This means that the curl command above creates the search
> request but does not wait for it to complete. Instead, you'll need to get
> the uuid of the search request and continue to GET it until the search
> completes. Once completed, you should DELETE the search request. Open up
> the Dev Tools in your browser to see this sequence of requests in action.
>
>
>
> Let me know if you have any more questions. Thanks.
>
>
>
> Matt
>
>
>
>
>
> On Tue, Nov 15, 2016 at 2:47 AM,  wrote:
>
> Hello,
>
> My SW context : nifi 1.0.0/ubuntu
>
> I am trying to use the provenance search options. I have the id of my
> processor (ProcessorID), but it's not very clear to me how to fill in the
> searchableFields.
>
> Is something like the following right? And in that case, where do I put
> the ProcessorID: in the field or in the id? And what goes in the label?
>
> ---
>
> curl -i -X PUT -H 'Content-Type: application/json' -d
> '{"searchableFields":[{"id":"ProcessorID","field":"processorId","label":"Component
> ID","type":"STRING"}]}
>
> ' http://localhost:8080/nifi-api/provenance/search-options'
>
> --
>
> Please can you clarify, with an example?
>
> Best regards
>
> phil
>
>
>
>
>
>


NiFi versions and remote process groups

2016-11-16 Thread Russell Bateman
Should I be able to wire a remote process group on a NiFi 1.0.0 canvas 
to an input port on a 0.7.1 canvas? (Neither is a cluster.)


I have an input port in 0.7.1, but when I click the "circle-arrow" icon 
on my NiFi 1.0.0 remote process group and drag it, I get


'NiFi Flow' does not have any output ports (true, it doesn't)
'NiFi Flow' does not have any input ports (false, there is one).

When I try to connect the output of a processor in my NiFi 1.0.0 
instance, I also get

'NiFi Flow' does not have any input ports.

I presume 'NiFi Flow' refers to my running NiFi 0.7.1 instance (since 
that's its name in the title bar of the browser).


I've examined:

   
https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#Remote_Group_Transmission
   https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#site-to-site
   
https://community.hortonworks.com/articles/16461/nifi-understanding-how-to-use-process-groups-and-r.html

Thanks.


query a blob field on db2

2016-11-16 Thread Kaggal, Vinod C.
Hello!

We have been using nifi successfully for a few things and can attest that this 
is a great tool to have in our tool box!

Recently, we were trying to query a blob field from db2 which resulted in an 
exception.

Any recommendation on how to address this would be appreciated!

Query:
SELECT BLOBTBL.BLOB_CONTENTS FROM CE_BLOB_DECOMP BLOBTBL fetch first 10 rows 
only with UR


Exception:

ExecuteSQL[id=93bdd730-65ba-4deb-b6a4-78a551916d1d] Unable to execute SQL 
select query SELECT BLOBTBL.BLOB_CONTENTS FROM CE_BLOB_DECOMP BLOBTBL fetch 
first 10 rows only with UR due to 
org.apache.nifi.processor.exception.ProcessException: com.ibm.db2.jcc.am.jo: 
[jcc][t4][1092][11638][3.57.110] Invalid data conversion: Wrong result column 
type for requested conversion. ERRORCODE=-4461, SQLSTATE=42815; routing to 
failure: org.apache.nifi.processor.exception.ProcessException: 
com.ibm.db2.jcc.am.jo: [jcc][t4][1092][11638][3.57.110] Invalid data 
conversion: Wrong result column type for requested conversion. ERRORCODE=-4461, 
SQLSTATE=42815


Thank you!
Vinod



REST provenance/lineage API

2016-11-16 Thread philippe.gibert
Hello Matt, I have the same problem with the provenance/lineage REST call

curl 'http://localhost:8080/nifi-api/provenance/lineage/..

It is very difficult for me (sorry ☹) to find the right JSON body to send in the
POST call.
Could you give an example again?

Philippe
Best regards
-
Thanks a lot Matt
It works nicely !


From: Matt Gilman [mailto:matt.c.gil...@gmail.com]
Sent: Tuesday, November 15, 2016 15:08
To: users@nifi.apache.org
Subject: Re: REST provenance api search options

Philippe,

Here's an example command for initiating a provenance search:

curl 'http://localhost:8080/nifi-api/provenance' -H 'Content-Type:
application/json' -H 'Accept: application/json, text/javascript, */*; q=0.01'
--data-binary
'{"provenance":{"request":{"maxResults":1000,"startDate":"11/15/2016 00:00:00 EST","endDate":"11/15/2016 23:59:59 EST","searchTerms":{"FlowFileUUID":"<uuid of flowfile>","Filename":"<name of flowfile>","ProcessorID":"<uuid of processor>"}}}}' --compressed

The available searchable fields are defined in your nifi.properties file under 
the following properties:

nifi.provenance.repository.indexed.fields
nifi.provenance.repository.indexed.attributes

Because the searches can be long-running, they are performed asynchronously.
This means that the curl command above creates the search request but does not
wait for it to complete. Instead, you'll need to get the uuid of the search
request and continue to GET it until the search completes. Once completed, you
should DELETE the search request. Open up the Dev Tools in your browser to see
this sequence of requests in action.
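
For completeness, the follow-up calls look roughly like this (the id below is a
placeholder taken from the response to the POST above):

  curl 'http://localhost:8080/nifi-api/provenance/<request-id>' -H 'Accept: application/json'
  # repeat the GET until the response reports "finished": true, then read the events and clean up:
  curl -X DELETE 'http://localhost:8080/nifi-api/provenance/<request-id>'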

Let me know if you have any more questions. Thanks.

Matt


On Tue, Nov 15, 2016 at 2:47 AM, 
> wrote:

Hello,

My SW context : nifi 1.0.0/ubuntu

I am trying to use the provenance search options. I have the id of my
processor (ProcessorID), but it's not very clear to me how to fill in the
searchableFields.

Is something like the following right? And in that case, where do I put the
ProcessorID: in the field or in the id? And what goes in the label?

---

curl -i -X PUT -H 'Content-Type: application/json' -d 
'{"searchableFields":[{"id":"ProcessorID","field":"processorId","label":"Component
 ID","type":"STRING"}]}

' http://localhost:8080/nifi-api/provenance/search-options'

--

Please can you clarify, with an example?

Best regards

phil






Re: puttcp

2016-11-16 Thread Raf Huys
I was wrong about it not resuming after connection reset. It does.

I just overlooked it, and as I don't have rights to comment on Nabble, I had
to wait for an email to reply to...

Anyway, thanks for clearing up the socket buffer thing!

On Wed, Nov 16, 2016 at 3:31 PM, Joe Witt  wrote:

> In 1.x back pressure happens by default so if that is what is being used
> it is probably why the flow appears stopped.
>
> In the 1.1 release this will be visually more obvious.
>
> Thanks
> Joe
>
> On Nov 16, 2016 9:29 AM, "Bryan Bende"  wrote:
>
>> Hello Raf,
>>
>> The message about attempting to set the socket buffer size is not really
>> an error that would prevent anything from working, it is just a warning so
>> that the user knows that NiFi created a new connection and tried to set the
>> Socket's receive buffer to some value that was specified in the properties
>> (1 MB in your case), and the OS only let it get set to some smaller value,
>> which could be important for someone trying to tune various settings.
>>
>> This happens whenever a new connection is created, which likely happens when
>> your TCP server goes down and comes back up. It can also happen during
>> normal operation of the processor: PutTCP will create connections on the
>> fly as needed and then close them if they have not been used for longer
>> than "Idle Connection Expiration".
>>
>> I definitely agree it would be nice for that message to not print all the
>> time though, one way to get rid of it would be to reduce the value of "Max
>> Size of Socket Send Buffer" to meet what the OS is allowing, another way
>> would be to configure logback.xml so that org.apache.nifi.processor.util
>> .put.sender.SocketChannelSender only logged at the ERROR level, since
>> this message is logged at the WARN level, but this means you could miss
>> other valuable warnings.
>>
>> When you say "some time later it becomes worse... no flowfiles are
>> generated", are you saying GenerateFlowFile is no longer generating flow
>> files? when this happens do you have a lot of flow files in queues, and do
>> you have back-pressure configured?
>>
>> Thanks,
>>
>> Bryan
>>
>>
>> On Wed, Nov 16, 2016 at 5:59 AM, Raf Huys  wrote:
>>
>>> I'm having a simple flow GenerateFlowfile -> ReplaceText -> PutTCP which
>>> should establish a TCP connection and send a small piece of text over that
>>> connection every 60 seconds. This is established by scheduling the first
>>> processor as a cron job.
>>>
>>> This pipeline works, until I start restarting our TCP server upstream.
>>>
>>>
>>> What can happen is that the TCP server is unavailable (due to
>>> reasons...). Nevertheless, when the TCP server becomes available again, the
>>> Nifi pipeline should continue doing its job.
>>>
>>> However, what I observe is that the PutTCP processor starts throwing the
>>> following errors after the TCP became unavailable, and then available again:
>>>
>>> Attempted to set Socket Buffer Size to ... bytes but could only set to
>>> ... bytes. You may want to consider changing the Operating System's maximum
>>> receive buffer
>>>
>>> I find this message confusing because the messages we are sending are
>>> about 10 characters wide. Also, the time between stopping/starting
>>> the TCP server is a couple of seconds, which means there is no backpressure
>>> of unsent flowfiles.
>>>
>>> Properties of the PutTCP processor
>>>
>>>
>>>- Hostname localhost
>>>- Port 4001
>>>- Max Size of Socket Send Buffer 1 MB
>>>- Idle Connection Expiration 5 seconds
>>>- Connection Per FlowFile true
>>>- Outgoing Message Delimiter \r\n
>>>- Timeout 10 seconds
>>>- SSL Context Service No value set
>>>- Character Set UTF-8
>>>
>>> I would love some help here.
>>>
>>> PS: Some time later it becomes worse, as every processor in the above
>>> pipeline actually stops doing anything...no flowfiles are generated, no
>>> errors are thrown...
>>>
>>> Thanks,
>>>
>>> Raf Huys
>>>
>>
>>


-- 
Mvg,

Raf Huys


Re: puttcp

2016-11-16 Thread Joe Witt
In 1.x, back pressure happens by default, so if that is what is being used it
is probably why the flow appears stopped.

In the 1.1 release this will be visually more obvious.

Thanks
Joe

On Nov 16, 2016 9:29 AM, "Bryan Bende"  wrote:

> Hello Raf,
>
> The message about attempting to set the socket buffer size is not really
> an error that would prevent anything from working, it is just a warning so
> that the user knows that NiFi created a new connection and tried to set the
> Socket's receive buffer to some value that was specified in the properties
> (1 MB in your case), and the OS only let it get set to some smaller value,
> which could be important for someone trying to tune various settings.
>
> This happens whenever a new connection is created, which likely happens when
> your TCP server goes down and comes back up. It can also happen during
> normal operation of the processor: PutTCP will create connections on the
> fly as needed and then close them if they have not been used for longer
> than "Idle Connection Expiration".
>
> I definitely agree it would be nice for that message to not print all the
> time though, one way to get rid of it would be to reduce the value of "Max
> Size of Socket Send Buffer" to meet what the OS is allowing, another way
> would be to configure logback.xml so that org.apache.nifi.processor.
> util.put.sender.SocketChannelSender only logged at the ERROR level, since
> this message is logged at the WARN level, but this means you could miss
> other valuable warnings.
>
> When you say "some time later it becomes worse... no flowfiles are
> generated", are you saying GenerateFlowFile is no longer generating flow
> files? when this happens do you have a lot of flow files in queues, and do
> you have back-pressure configured?
>
> Thanks,
>
> Bryan
>
>
> On Wed, Nov 16, 2016 at 5:59 AM, Raf Huys  wrote:
>
>> I'm having a simple flow GenerateFlowfile -> ReplaceText -> PutTCP which
>> should establish a TCP connection and send a small piece of text over that
>> connection every 60 seconds. This is established by scheduling the first
>> processor as a cron job.
>>
>> This pipeline works, until I start restarting our TCP server upstream.
>>
>>
>> What can happen is that the TCP server is unavailable (due to
>> reasons...). Nevertheless, when the TCP server becomes available again, the
>> Nifi pipeline should continue doing its job.
>>
>> However, what I observe is that the PutTCP processor starts throwing the
>> following errors after the TCP became unavailable, and then available again:
>>
>> Attempted to set Socket Buffer Size to ... bytes but could only set to
>> ... bytes. You may want to consider changing the Operating System's maximum
>> receive buffer
>>
>> I find this message confusing because the messages we are sending are
>> about 10 characters wide. Also, the time between stopping/starting
>> the TCP server is a couple of seconds, which means there is no backpressure
>> of unsent flowfiles.
>>
>> Properties of the PutTCP processor
>>
>>
>>- Hostname localhost
>>- Port 4001
>>- Max Size of Socket Send Buffer 1 MB
>>- Idle Connection Expiration 5 seconds
>>- Connection Per FlowFile true
>>- Outgoing Message Delimiter \r\n
>>- Timeout 10 seconds
>>- SSL Context Service No value set
>>- Character Set UTF-8
>>
>> I would love some help here.
>>
>> PS: Some time later it becomes worse, as every processor in the above
>> pipeline actually stops doing anything...no flowfiles are generated, no
>> errors are thrown...
>>
>> Thanks,
>>
>> Raf Huys
>>
>
>


Re: puttcp

2016-11-16 Thread Bryan Bende
Hello Raf,

The message about attempting to set the socket buffer size is not really an
error that would prevent anything from working; it is just a warning so
that the user knows that NiFi created a new connection and tried to set the
Socket's receive buffer to some value that was specified in the properties
(1 MB in your case), and the OS only let it get set to some smaller value,
which could be important for someone trying to tune various settings.

This happens whenever a new connection is created, which likely happens when
your TCP server goes down and comes back up. It can also happen during
normal operation of the processor: PutTCP will create connections on the
fly as needed and then close them if they have not been used for longer
than "Idle Connection Expiration".

I definitely agree it would be nice for that message to not print all the
time, though. One way to get rid of it would be to reduce the value of "Max
Size of Socket Send Buffer" to match what the OS is allowing; another way
would be to configure logback.xml so that
org.apache.nifi.processor.util.put.sender.SocketChannelSender only logs
at the ERROR level (since this message is logged at the WARN level), but
this means you could miss other valuable warnings.
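
For reference, that logback.xml change would be a logger entry along these
lines (using the class name mentioned above, placed alongside the other
logger entries already in the file):

  <logger name="org.apache.nifi.processor.util.put.sender.SocketChannelSender" level="ERROR"/>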

When you say "some time later it becomes worse... no flowfiles are
generated", are you saying GenerateFlowFile is no longer generating flow
files? When this happens, do you have a lot of flow files in queues, and do
you have back-pressure configured?

Thanks,

Bryan


On Wed, Nov 16, 2016 at 5:59 AM, Raf Huys  wrote:

> I'm having a simple flow GenerateFlowfile -> ReplaceText -> PutTCP which
> should establish a TCP connection and send a small piece of text over that
> connection every 60 seconds. This is established by scheduling the first
> processor as a cron job.
>
> This pipeline works, until I start restarting our TCP server upstream.
>
>
> What can happen is that the TCP server is unavailable (due to reasons...).
> Nevertheless, when the TCP server becomes available again, the Nifi
> pipeline should continue doing its job.
>
> However, what I observe is that the PutTCP processor starts throwing the
> following errors after the TCP became unavailable, and then available again:
>
> Attempted to set Socket Buffer Size to ... bytes but could only set to ...
> bytes. You may want to consider changing the Operating System's maximum
> receive buffer
>
> I find this message confusing because the messages we are sending are
> about 10 characters wide. Also, the time between between stopping/starting
> the TCP server is a couple of seconds, which means there is no backpressure
> of unsent flowfiles.
>
> Properties of the PutTCP processor
>
>
>- Hostname localhost
>- Port 4001
>- Max Size of Socket Send Buffer 1 MB
>- Idle Connection Expiration 5 seconds
>- Connection Per FlowFile true
>- Outgoing Message Delimiter \r\n
>- Timeout 10 seconds
>- SSL Context Service No value set
>- Character Set UTF-8
>
> I would love some help here.
>
> PS: Some time later it becomes worse, as every processor in the above
> pipeline actually stops doing anything...no flowfiles are generated, no
> errors are thrown...
>
> Thanks,
>
> Raf Huys
>


puttcp

2016-11-16 Thread Raf Huys
I'm having a simple flow GenerateFlowfile -> ReplaceText -> PutTCP which
should establish a TCP connection and send a small piece of text over that
connection every 60 seconds. This is established by scheduling the first
processor as a cron job.

This pipeline works, until I start restarting our TCP server upstream.


What can happen is that the TCP server is unavailable (due to reasons...).
Nevertheless, when the TCP server becomes available again, the Nifi
pipeline should continue doing its job.

However, what I observe is that the PutTCP processor starts throwing the
following errors after the TCP became unavailable, and then available again:

Attempted to set Socket Buffer Size to ... bytes but could only set to ...
bytes. You may want to consider changing the Operating System's maximum
receive buffer

I find this message confusing because the messages we are sending are about
10 characters wide. Also, the time between stopping/starting the
TCP server is a couple of seconds, which means there is no backpressure of
unsent flowfiles.

Properties of the PutTCP processor


   - Hostname localhost
   - Port 4001
   - Max Size of Socket Send Buffer 1 MB
   - Idle Connection Expiration 5 seconds
   - Connection Per FlowFile true
   - Outgoing Message Delimiter \r\n
   - Timeout 10 seconds
   - SSL Context Service No value set
   - Character Set UTF-8

I would love some help here.

PS: Some time later it becomes worse, as every processor in the above
pipeline actually stops doing anything...no flowfiles are generated, no
errors are thrown...

Thanks,

Raf Huys


RE: Nifi- PutEmail processor issue

2016-11-16 Thread Gadiputi, Sravani
Thanks Joe and Oleg for your inputs.

The issue is mainly with sending the email through PutEmail.
Irrespective of whether the flow succeeds or fails, the mail should be triggered.
In this case, when PutEmail is asked to send the mail to the corresponding
recipient, it fails because the connection timed out. I think it's because of
network availability or network traffic.

It's purely about the PutEmail processor, not about transferring the file. I
have also provided the correct host, username, etc.

We can partially overcome this issue with a retry approach: whenever PutEmail
hits the error, the flow keeps retrying the connection for a number of attempts
that we set using Expression Language. Somewhere in between, PutEmail should
connect to the host and send the email.

What I am looking for is whether there is any other way, in the PutEmail
processor itself, to overcome this issue.

Thanks,
Sravani




-Original Message-
From: Joe Witt [mailto:joe.w...@gmail.com] 
Sent: Tuesday, November 15, 2016 6:54 PM
To: users@nifi.apache.org
Cc: jtsw...@gmail.com
Subject: Re: Nifi- PutEmail processor issue

Sravani,

The flow you describe makes sense.  So now let's focus on the PutEmail
processor.  Can you please clarify whether, when it fails to connect, the
flowfile is transferred to the 'failure' relationship or the 'success'
relationship?  If it flows to the 'failure' relationship then, as Conrad points
out, you can simply have failure loop back to that processor so it will
continue to retry until it actually succeeds in transferring.  If it flows to
the 'success' relationship even though the email send fails, then we need to
look into it further, and the stack trace Oleg requests will be helpful.
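
As a quick sanity check outside of NiFi, something like the following run from
the NiFi host can rule out basic SMTP reachability problems (the host and port
below are placeholders for whatever is configured on PutEmail):

  nc -vz smtp.example.com 25

If that also times out, the problem is in the network/firewall rather than in NiFi.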

Thanks
Joe

On Tue, Nov 15, 2016 at 7:47 AM, Oleg Zhurakousky 
 wrote:
> Sravani
>
> Would you be able to provide a full stack trace of the connection exception?
> Also, while I assume you are providing the correct connection
> properties (i.e., host, port, etc.), I would still recommend checking that
> they are correct. In any event the full stack trace would
> definitely help, and you can find it in the nifi app logs.
>
> Cheers
> Oleg
>
> On Nov 15, 2016, at 4:07 AM, Gadiputi, Sravani 
>  wrote:
>
> Thank you for your reply.
>
> My requirement is: I copy/send 3 different files from
> source to destination through NiFi, and these jobs run once a week.
> I want to know, via email, which file was moved successfully.
> In this process, I have configured PutEmail for each flow. There are
> hardly 3 notifications in total.
> Though the files have been moved to the destination, we do not always
> receive the notifications properly and instead get the error below.
>
> Please suggest.
>
> Thanks,
> Sravani
>
>
> From: Jeff [mailto:jtsw...@gmail.com]
> Sent: Tuesday, November 15, 2016 1:25 PM
> To: users@nifi.apache.org
> Subject: Re: Nifi- PutEmail processor issue
>
> Hello Sravani,
>
> Could it be possible that the SMTP server you're using is denying 
> connections due to the volume of emails your flow might be sending?  
> How many emails are sent per flow file, and how many emails do you 
> estimate are sent per minute?
>
> If this is the case, you can modify your flow to aggregate flowfiles 
> with a processor like MergeContent so that you can send emails that 
> resemble a digest, rather than a separate email for each flowfile that 
> moves through your flow.
>
> On Mon, Nov 14, 2016 at 11:59 PM Gadiputi, Sravani 
>  wrote:
>
>
> Hi,
>
> I have used the PutEmail processor in my project to send an email
> notification for the successful/failed copying of files.
> Each file flow has a corresponding PutEmail to send an email
> notification to the respective recipients.
>
> Here the issue is: sometimes the email notification is sent to the
> respective recipients successfully for a successful/failed job,
> but sometimes, for one specific job, the email notification is not
> sent to the recipients even though the job is successful, due to the error below.
>
> Error:
>
> Could not connect to SMTP host
> Java.net.ConnectException: Connection timed out
>
> Could you please suggest how we can overcome this error?
>
>
> Thanks,
> Sravani
>
>
>