I have 500+ HTTP requests that return files of various sizes, which are
stored into S3.
For each HTTP (OAI-PMH) request we get a file to put into S3.
So the content repository keeps growing with the size of these files. At some
point it reaches 4.6 GB, and that's all the available disk.
Hi Selvam,
As mentioned, please keep messages to one list. Moving dev to bcc
again.
Archiving applies only to content that has exited the flow and
is not referenced by any FlowFiles currently in your processing graph,
similar to garbage collection in Java. For this particular
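Archiving behavior is controlled by the content repository settings in nifi.properties. A minimal sketch of the relevant properties (the retention values shown here are illustrative assumptions, not recommendations for this setup):

```properties
# nifi.properties -- content repository archiving (values illustrative)
nifi.content.repository.archive.enabled=true
# Archived content is reclaimed once either limit below is reached
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
```

On a disk-constrained instance, lowering the usage percentage trades archive history for free space; content still referenced by FlowFiles in the graph is never eligible for removal regardless of these settings.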
Hello
Please only post to one list. I have moved 'dev@nifi' to bcc.
In the docs for this processor [1] you'll find reference to "Multipart
Part Size". Set that to a smaller value appropriate for your JVM
memory settings. For instance, if you have a default JVM heap size of
512MB you'll want
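As a rough illustration, the relevant PutS3Object processor properties would be configured along these lines (the 50 MB figure is an assumption sized well below a 512 MB heap, not a verified recommendation):

```
Multipart Threshold: 50 MB
Multipart Part Size: 50 MB
```

Objects larger than the threshold are then uploaded to S3 in parts of the configured size, so the processor buffers at most one part at a time rather than the whole file.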
This is the exact error.
On Tue, Sep 20, 2016 at 4:30 PM, Selvam Raman wrote:
> Hi,
>
> I am pushing data to S3 using PutS3Object. I have set up a NiFi 1.0
> zero-master cluster.
>
> The EC2 instance has only 8 GB of hard disk. The content repository writes
> up to 4.6 GB of data