Update: session.remove(newFiles) does not work. I filed
https://issues.apache.org/jira/browse/NIFI-3205
On Thu, Dec 15, 2016 at 11:05 AM, Alan Jackoway <al...@cloudera.com> wrote:
> I am getting the successfully checkpointed message.
>
> I think I figured this out. Now we have to […]
> […] FlowFile repo
> is checkpointing? Would have the words "Successfully checkpointed FlowFile
> Repository".
> That should happen every 2 minutes, approximately.
>
>
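The two-minute cadence mentioned above corresponds to the FlowFile repository checkpoint interval, which is set in nifi.properties. A minimal sketch, shown with the documented default value (verify the property against your NiFi version):

```properties
# nifi.properties: how often the FlowFile repository is checkpointed
nifi.flowfile.repository.checkpoint.interval=2 mins
```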
> On Dec 14, 2016, at 8:56 PM, Alan Jackoway <al...@cloudera.com> wrote:
> […] "FileNotFound" as well as
> "NoSuchFile" and see if that hits anywhere.
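The search suggested above can be run directly against the NiFi logs. A minimal sketch; the log path is the default install layout and the count flag is a choice here, neither is from the thread:

```shell
# Count case-insensitive file-handle errors in the NiFi app log.
# logs/nifi-app.log is the default location; override LOG for other layouts.
LOG=${LOG:-logs/nifi-app.log}
grep -icE 'FileNotFound|NoSuchFile' "$LOG"
```

Dropping `-c` prints the matching lines themselves, which is more useful once you know there are hits.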
>
>
> [1] http://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#configuration-best-practices
>
>
> On Dec 14, 2016, at 8:25 PM, Alan Jackoway <al...@cloudera.com> wrote: […]
We haven't let the disk hit 100% in a while, but it's been crossing 90%. We
haven't seen the "Unable to checkpoint" message in the last 24 hours.
$ ulimit -Hn
4096
$ ulimit -Sn
1024
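The soft limit of 1024 shown above is low for a busy NiFi node. The admin-guide best practices referenced earlier ([1]) suggest raising the open-file limits, e.g. in /etc/security/limits.conf (values are the guide's suggestion; tune for your environment and re-login for them to take effect):

```
*  hard  nofile  50000
*  soft  nofile  50000
```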
I will work on tracking a specific file next.
On Wed, Dec 14, 2016 at 8:17 PM, Alan Jackoway <al...@cloudera.com> wrote: […]
[…] number of processors and transformations.
Should the number of drained claims correspond to the number of flow files
that moved through the system?
Alan
On Wed, Dec 14, 2016 at 6:59 PM, Alan Jackoway <al...@cloudera.com> wrote:
Some updates:
* We fixed the issue with missing transfer relationships, and this did not
go away.
* We saw this a few minutes ago when the queue was at 0.
What should I be looking for in the logs to figure out the issue?
Thanks,
Alan
On Mon, Dec 12, 2016 at 12:45 PM, Alan Jackoway <al...@cloudera.com> wrote: […]
> […]ger"
> level="DEBUG" />
> Then that should allow you to see a DEBUG-level log message every time
> that a Resource Claim is marked destructible and every time
> that the Content Repository requests the collection of Destructible Claims
> ("Drained 100 destructible […]
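The logger fragment quoted above can be reconstructed as a logback.xml entry. The fully qualified logger name below is an assumption, based on the NiFi class that emits the "Drained ... destructible claims" message; verify it against your NiFi version before relying on it:

```xml
<!-- logback.xml: DEBUG logging for resource-claim cleanup.
     Logger name is an assumption inferred from the quoted fragment;
     confirm the class name in your NiFi distribution. -->
<logger name="org.apache.nifi.controller.repository.claim.StandardResourceClaimManager"
        level="DEBUG" />
```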
[…] Any pointers on code/configs for that would be appreciated.
Thanks,
Alan
On Sun, Dec 11, 2016 at 8:51 AM, Joe Gresock <jgres...@gmail.com> wrote:
> No, in my scenario a server restart would not affect the content repository
> size.
>
> On Sun, Dec 11, 2016 at 8:46 AM, Alan Jackoway <al...@cloudera.com> wrote: […]
[…].
On Sun, Dec 11, 2016 at 1:37 AM, Alan Jackoway <al...@cloudera.com> wrote:
> The scenario Joe G describes is almost exactly what we are doing. We bring
> in large files and unpack them into many smaller ones. In the most recent
> iteration of this problem, I saw that we had many small […]
> > […] mean that you have data backlogged in the flow that account for that
> > much space. If that is certainly not the case for you then we need to
> > dig deeper. If you could do screenshots or share log files and stack
> > dumps around this time those would all be helpful. If the […]
[…] I haven't dug into what that means.
Alan
On Fri, Dec 9, 2016 at 9:53 PM, Alan Jackoway <al...@cloudera.com> wrote:
Hello,
We have a node on which nifi content repository keeps growing to use 100%
of the disk. It's a relatively high-volume process. It chewed through more
than 100GB in the three hours between when we first saw it hit 100% of the
disk and when we just cleaned it up again.
We are running nifi […]
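For tracking growth like the 100GB described above, a quick check of the repository's footprint on disk can help. A minimal sketch; `content_repository` is NiFi's default directory name, so adjust if nifi.properties points elsewhere:

```shell
# Summarize disk usage of the content repository (default directory name).
REPO=${REPO:-content_repository}
du -sh "$REPO"
```

Running this periodically (e.g. via cron or watch) shows whether the repository shrinks after claims are drained or only ever grows.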
Builds with JDK 1.8.0_60 so I am a +0 in total. It's been a long time
(maybe 0.5?) since I attempted a full nifi build rather than just building
the processors I am working with. It seems like some time in that window a
JDK incompatibility was introduced.
On Mon, Aug 29, 2016 at 2:58 PM, Alan […]
-1 (non-binding) from me, but likely an issue on my machine.
I get a compile error in nifi-framework-cluster on this release and master
using an empty maven repository.
Mac OS X 10.10.5. Maven 3.2.5. Oracle Java 1.8.0_31
The end of maven debug output (which doesn't even say which file is […]
We have begun using a workaround for a problem similar to this, but it is
fairly ugly. In many cases we really want to run something like an
ingestion process from an external system at a specific time on one node.
Without https://issues.apache.org/jira/browse/NIFI-401 you can't quite do
it.
What […]
[…] to enhance this functionality. Perhaps this can be
> better documented in the Admin Guide [2].
> >>
> >> Can you also provide the full stacktrace and your system configuration,
> >> if possible, to help with the troubleshooting? Thank you.
> >>
> >> [1] ht[…]
Though I love the concept of ☃ as your separator, my belief is that the
correct way to do this is to replace your custom delimiter with the ones that
are defined in ASCII (and therefore extremely unlikely to appear in your
data): https://en.wikipedia.org/wiki/Delimiter#ASCII_delimited_text
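The idea above can be sketched with the ASCII unit (0x1F) and record (0x1E) separators; the sample records here are invented for illustration:

```shell
# Encode two records with ASCII separators, then parse them with awk.
us=$(printf '\037')   # unit separator (0x1F): between fields
rs=$(printf '\036')   # record separator (0x1E): between records
printf '%s' "alice${us}30${rs}bob${us}25" |
  awk 'BEGIN { RS = "\036"; FS = "\037" } { print $1 ": " $2 }'
```

Because these control characters never occur in ordinary text, no quoting or escaping layer is needed, unlike commas or tabs.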
That […]
I am not a committer, but I think that at a minimum another committer
should sign off on it. I don't mind if a different committer says "looks
good to me, you can merge that," but I don't think committers should put
their own code in without sign off.
On Tue, Nov 3, 2015 at 10:23 AM, Oleg […]
We should also check similar behavior for missing properties. I was
thinking about this today while modifying a custom processor and decided it
was safest to just make a new processor with the new properties.
I tried to find info in the developer guide but couldn't. It would be good
to document […]