Is there any reason not to just use Java 8, so this isn't an issue?
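
On Java 8, PermGen was replaced by Metaspace, so -XX:MaxPermSize is ignored
there. As a minimal sketch (the arg number is illustrative, not from the
shipped config), the equivalent cap in bootstrap.conf would be something like:

java.arg.13=-XX:MaxMetaspaceSize=256M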


> On Aug 30, 2015, at 5:54 AM, Joe Witt <[email protected]> wrote:
> 
> Chris
> 
> Alan nailed it.  In bootstrap.conf, just uncomment these two lines
> (remove the leading '#') and restart NiFi.
> 
> #java.arg.11=-XX:PermSize=128M
> #java.arg.12=-XX:MaxPermSize=128M
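> 
> For example, after uncommenting they would read as below (128M is just
> the shipped default; you could also bump it, e.g. to 256M):
> 
> java.arg.11=-XX:PermSize=128M
> java.arg.12=-XX:MaxPermSize=128M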
> 
> 
> But you are not the first to run into this stumbling block, so I have
> created this JIRA [1].
> 
> Thanks
> Joe
> 
> [1] https://issues.apache.org/jira/browse/NIFI-909
> 
>> On Sun, Aug 30, 2015 at 2:58 AM, Alan Jackoway <[email protected]> wrote:
>> Have you tried setting the PermGen size with something like
>> -XX:MaxPermSize=512m? If you look in the file bootstrap.conf, you should
>> see an example of how to set it.
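>> 
>> As a minimal sketch (the java.arg number is illustrative; pick one that
>> is not already used in your bootstrap.conf), the entry would look like:
>> 
>> java.arg.13=-XX:MaxPermSize=512m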
>> 
>> I ask because that error doesn't look specific to PutHDFS to me. If the real
>> problem is PermGen space, you may go through a lot of work trying to remove
>> that processor and then have this happen again when you add something else.
>> 
>> Let us know if bootstrap.conf works for you,
>> Alan
>> 
>>> On Aug 30, 2015 7:21 AM, "Chris Teoh" <[email protected]> wrote:
>>> 
>>> Hey folks,
>>> 
>>> I added PutHDFS to my workflow, pointed it at core-site.xml and
>>> hdfs-site.xml, and it crashed. Restarting doesn't seem to help: the
>>> browser says NiFi is loading, and then it fails to load.
>>> 
>>> How do I remove the PutHDFS processor without the web UI?
>>> 
>>> Errors in the log are:
>>> o.a.n.p.PersistentProvenanceRepository Failed to rollover Provenance
>>> repository due to java.lang.OutOfMemoryError: PermGen space
>>> 
>>> o.apache.nifi.processors.hadoop.PutHDFS
>>> [PutHDFS[id=4cc42c80-e37f-4122-a6f2-0c3b61ffc00b]] Failed to write to
>>> HDFS due to {}
>>> java.lang.OutOfMemoryError: PermGen space
>>>     at java.lang.ClassLoader.defineClass1(Native Method) ~[na:1.7.0_67]
>>>     at java.lang.ClassLoader.defineClass(ClassLoader.java:800) ~[na:1.7.0_67]
>>>     at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[na:1.7.0_67]
>>>     at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) ~[na:1.7.0_67]
>>>     at java.net.URLClassLoader.access$100(URLClassLoader.java:71) ~[na:1.7.0_67]
>>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:361) ~[na:1.7.0_67]
>>>     at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_67]
>>>     at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_67]
>>>     at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_67]
>>>     at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_67]
>>>     at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_67]
>>>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:755) ~[hadoop-hdfs-2.6.0.jar:na]
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_67]
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_67]
>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_67]
>>>     at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_67]
>>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.6.0.jar:na]
>>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.6.0.jar:na]
>>>     at com.sun.proxy.$Proxy156.getFileInfo(Unknown Source) ~[na:na]
>>>     at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988) ~[hadoop-hdfs-2.6.0.jar:na]
>>>     at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118) ~[hadoop-hdfs-2.6.0.jar:na]
>>>     at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114) ~[hadoop-hdfs-2.6.0.jar:na]
>>>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.6.0.jar:na]
>>>     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114) ~[hadoop-hdfs-2.6.0.jar:na]
>>>     at org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:232) ~[nifi-hdfs-processors-0.2.1.jar:0.2.1]
>>>     at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-0.2.1.jar:0.2.1]
>>>     at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1077) ~[na:na]
>>>     at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:127) ~[na:na]
>>>     at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:49) ~[na:na]
>>>     at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:119) ~[na:na]
>>>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_67]
>>>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) ~[na:1.7.0_67]
