Thanks Joe for checking. Yes, I got past it and was able to demo it to
the team successfully :) Now the next challenge is to tune NiFi for
high throughput.
On Mon, Oct 31, 2016 at 7:08 PM, Joe Witt wrote:
> Krish,
>
> Did you ever get past this?
Krish,
Did you ever get past this?
Thanks
Joe
On Fri, Oct 28, 2016 at 2:36 PM, Gop Krr wrote:
> James, permission issue got resolved. I still don't see any write.
>
> On Fri, Oct 28, 2016 at 10:34 AM, Gop Krr wrote:
>>
>> Thanks James.. I am looking into the permission issue and will update the thread.
Thanks James.. I am looking into the permission issue and will update the
thread. I will also make the changes per your recommendation.
On Fri, Oct 28, 2016 at 10:23 AM, James Wing wrote:
> From the screenshot and the error message, I interpret the sequence of
> events to be something like this:
From the screenshot and the error message, I interpret the sequence of
events to be something like this:
1.) ListS3 succeeds and generates flowfiles with attributes referencing S3
objects, but no content (0 bytes)
2.) FetchS3Object fails to pull the S3 object content with an Access Denied
error,
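The two-step hand-off described above can be modeled in a few lines. This is an illustrative sketch only, not NiFi's real processor API; the names `list_s3`, `fetch_s3_object`, and `AccessDenied` are made-up stand-ins:

```python
class AccessDenied(Exception):
    """Stand-in for the Access Denied error seen in step 2."""

def list_s3(bucket, keys):
    # Step 1: a ListS3-style listing emits flowfiles whose attributes
    # reference the S3 objects but whose content is empty (0 bytes).
    return [{"attributes": {"s3.bucket": bucket, "filename": key},
             "content": b""} for key in keys]

def fetch_s3_object(flowfile, readable_objects):
    # Step 2: a FetchS3Object-style fetch fills in the content, or fails
    # with Access Denied when the credentials cannot read the object.
    key = flowfile["attributes"]["filename"]
    if key not in readable_objects:
        raise AccessDenied(key)
    flowfile["content"] = readable_objects[key]
    return flowfile

flowfiles = list_s3("source-bucket", ["a.txt", "b.txt"])
print(all(ff["content"] == b"" for ff in flowfiles))  # True: listed, zero bytes
```

The point of the model: the listing step succeeding tells you nothing about whether the fetch step has read permission on the individual objects.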
Thanks Bryan, Joe, Adam and Pierre. I got past this issue by switching to
0.7.1. Now it is able to list the files from one bucket and create those
files in the other bucket. But the write is not happening and I am getting a
permission issue (attached below for reference). Could this be
Quick remark: the fix has also been merged in master and will be in release
1.1.0.
Pierre
2016-10-28 15:22 GMT+02:00 Gop Krr :
> Thanks Adam. I will try 0.7.1 and update the community on the outcome. If
> it works then I can create a patch for 1.x
> Thanks
> Rai
>
> On Thu, Oct 27, 2016 at 7:41 PM, Adam Lamar wrote:
Thanks Adam. I will try 0.7.1 and update the community on the outcome. If
it works then I can create a patch for 1.x
Thanks
Rai
On Thu, Oct 27, 2016 at 7:41 PM, Adam Lamar wrote:
> Hey All,
>
> I believe OP is running into a bug fixed here:
>
Looking at this line [1] makes me think the FetchS3 processor is
properly streaming the bytes directly to the content repository.
Looking at the screenshot showing nothing out of the ListS3 processor
makes me think the bucket has so many things in it that the processor
or associated library isn't
moving dev to bcc
Yes, I believe the issue here is that FetchS3 doesn't do chunked
transfers and so is loading it all into memory. I've not verified this
in the code yet, but it seems quite likely. Krish, if you can verify
that going with a larger heap gets you in the game, can you please file
a JIRA?
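The difference Joe is describing (buffering the whole object versus chunked transfer) comes down to the contrast below. A minimal sketch in plain Python, not the actual FetchS3 code; the 8 KiB chunk size is an arbitrary choice:

```python
import io

CHUNK_SIZE = 8192  # arbitrary; the point is it's bounded

def copy_all_at_once(src, dst):
    # What a non-chunked fetch effectively does: the entire object lands
    # in memory at once, so a large object can blow the heap.
    data = src.read()
    dst.write(data)
    return len(data)  # peak bytes held in memory

def copy_chunked(src, dst, chunk_size=CHUNK_SIZE):
    # Chunked transfer: never hold more than chunk_size bytes at a time,
    # so memory use is independent of the object size.
    peak = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        peak = max(peak, len(chunk))
        dst.write(chunk)
    return peak

payload = b"x" * 1_000_000  # stand-in for a large S3 object
src, dst = io.BytesIO(payload), io.BytesIO()
peak = copy_chunked(src, dst)
print(dst.getvalue() == payload, peak <= CHUNK_SIZE)  # True True
```

Either way the full object reaches the destination; only the peak memory footprint differs, which is why a larger heap masks the problem without fixing it.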
Hello,
Are you running with all of the default settings?
If so, you would probably want to try increasing the memory settings in
conf/bootstrap.conf.
They default to 512mb; you may want to try bumping that up to 1024mb.
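For reference, a sketch of what that change might look like in conf/bootstrap.conf. The java.arg.2/java.arg.3 line names are as I recall them from a stock install; verify against your own file:

```properties
# conf/bootstrap.conf -- JVM heap settings (stock default is 512m)
java.arg.2=-Xms1024m
java.arg.3=-Xmx1024m
```

A restart of NiFi is needed for bootstrap.conf changes to take effect.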
-Bryan
On Thu, Oct 27, 2016 at 5:46 PM, Gop Krr wrote: