Hi Mark,
I’m using 0.5.1 and yes, I am using DistributedMapCacheClientService (that is 
the only option, isn’t it?).
Deleting the local file is, I guess, the piece I was missing – recreating the 
processor gives it a different id, which I suppose is why it starts from 
scratch after that.

I understand why clearing the cache for this processor would be an edge case, 
but there are times when a major balls-up requires one to reprocess a load of 
files in HDFS!

Thanks, I’ll look out for 0.6

Conrad

From: Mark Payne <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Wednesday, 16 March 2016 at 12:36
To: "[email protected]" <[email protected]>
Subject: Re: ListHDFS cache (again)

Conrad,

If you are using a version that used DistributedMapCacheClientService, it was 
also saving state in a local file:
$NIFI_HOME/conf/state/<processor uuid>

You would need to delete that file as well, though I am not sure whether it 
would notice the deletion without restarting NiFi. That processor wasn't 
really designed with the intent of letting the user clear the cache.
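For anyone else hitting this, the manual reset Mark describes might look 
something like the sketch below. This is an illustrative guess, not an 
official procedure: NIFI_HOME and the processor uuid are placeholders for 
your own values, and nifi.sh is the standard start/stop script in a default 
install (check yours).

```shell
#!/bin/sh
# Illustrative sketch only: reset ListHDFS local state on NiFi 0.5.x.
# NIFI_HOME and PROCESSOR_UUID are placeholders, not real values.
NIFI_HOME="${NIFI_HOME:-/opt/nifi}"
PROCESSOR_UUID="your-listhdfs-processor-uuid"   # hypothetical id

STATE_FILE="$NIFI_HOME/conf/state/$PROCESSOR_UUID"

"$NIFI_HOME/bin/nifi.sh" stop      # stop NiFi first, per Mark's caveat that
                                   # the deletion may not be noticed while running
rm -f "$STATE_FILE"                # remove the processor's local state file
"$NIFI_HOME/bin/nifi.sh" start     # restart so ListHDFS rebuilds its state
```

Remember that the entry in the distributed cache would also need to go (e.g. 
by recreating the backing cache service), so that neither copy of the state 
survives.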

That said, NiFi 0.6.0, which should be coming very soon (hopefully next 
week?), has been updated to let users easily view and reset the state. 
Hopefully that will ease the pain for you.

Thanks
-Mark

On Mar 16, 2016, at 4:33 AM, Conrad Crampton 
<[email protected]> wrote:

Hi,
The subject of invalidating/resetting the cache/forcing a new listing from 
ListHDFS was raised back in December 2015 [1], with the ‘resolution’ being to 
delete the backing DistributedMapCacheClientService and add a different one. I 
have tried this – recreated it completely from scratch with a different name, 
different node, different port, all manner of combinations – but the listing 
returned from ListHDFS still starts where it left off. Satanic forces are at 
work here on my cluster, I’m sure!!!

The odd thing is that there was a moment, when I ran the flow, where it did 
start from scratch, but I can’t remember the circumstances. I’ve tried 
rebooting the NCM to no avail.

Any further suggestions beyond those in [1]? This is a bit of a show-stopper, 
as I can’t get the data into the flow I want no matter how I try.

I have got this working again, but only by deleting the processor and 
recreating it. A bit of a faff, so I’d be interested to hear whether anyone 
else has had the same problem and has tips on a better workaround than mine.

Thanks
Conrad

[1] 
https://mail-archives.apache.org/mod_mbox/nifi-users/201512.mbox/%3CCAEf2RqDGoTGkBd4dnLhuawPr4oOmFF7rRkrv_=ae8u2rq6o...@mail.gmail.com%3E


SecureData, combating cyber threats

________________________________

The information contained in this message or any of its attachments may be 
privileged and confidential and intended for the exclusive use of the intended 
recipient. If you are not the intended recipient any disclosure, reproduction, 
distribution or other dissemination or use of this communications is strictly 
prohibited. The views expressed in this email are those of the individual and 
not necessarily of SecureData Europe Ltd. Any prices quoted are only valid if 
followed up by a formal written quote.

SecureData Europe Limited. Registered in England & Wales 04365896. Registered 
Address: SecureData House, Hermitage Court, Hermitage Lane, Maidstone, Kent, 
ME16 9NT





