[ http://issues.apache.org/jira/browse/VELOCITY-203?page=comments#action_12329723 ]
Will Glass-Husain commented on VELOCITY-203:
--------------------------------------------

Responding to this old bug. If you or someone else wants to write a patch, I'll commit it. Even a self-contained test case would be helpful. Thanks.

> No upper limit on cached file handles causes random ResourceLoader exceptions
> ------------------------------------------------------------------------------
>
>          Key: VELOCITY-203
>          URL: http://issues.apache.org/jira/browse/VELOCITY-203
>      Project: Velocity
>         Type: Bug
>   Components: Texen
>     Versions: 1.0-Release
>  Environment: Operating System: other
>               Platform: Other
>     Reporter: Ian Ragsdale
>     Assignee: Velocity-Dev List
>
> The Generator class caches file handles while generating files. There is no
> upper limit on the number of handles it will cache, so when generating many
> files with the same generator it eventually hits the per-process limit on
> open files. This then causes failures in the resource loader, because it
> cannot open any more files. I've seen this problem on OS X and Linux, but it
> should affect pretty much any platform.
>
> You can work around the problem by increasing the number of file handles
> available to the Ant task, but that isn't always easy to do when running it
> as a subtask from an IDE, and implementing a basic LRU scheme should be
> fairly simple (sketched below).
>
> An alternative fix would be to report the error more accurately - the current
> implementation just throws a ResourceNotFoundException, making it very hard
> to track down the root cause of the error.
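
For reference, a minimal sketch of the kind of bounded LRU scheme the description
suggests. This is not a patch against the actual Texen Generator (its internal
field and method names are not reproduced here); the class name LruWriterCache,
the maxOpen parameter, and the getWriter helper are all illustrative. The idea is
simply to keep writers in an access-ordered LinkedHashMap and close/evict the
least-recently-used one once a fixed number of handles is open:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.Writer;
    import java.util.LinkedHashMap;
    import java.util.Map;

    /**
     * Bounded, access-ordered cache of open Writers. Once more than maxOpen
     * writers are cached, the least-recently-used one is closed and evicted,
     * keeping the number of open file handles under the per-process limit.
     */
    public class LruWriterCache extends LinkedHashMap<String, Writer> {

        private final int maxOpen;

        public LruWriterCache(int maxOpen) {
            // accessOrder = true: iteration order is least-recently-used first
            super(16, 0.75f, true);
            this.maxOpen = maxOpen;
        }

        /** Returns an open writer for the path, reusing a cached one if present. */
        public synchronized Writer getWriter(String path) throws IOException {
            Writer w = get(path);
            if (w == null) {
                // append mode, so a previously evicted file can be re-opened
                // without losing output already written to it
                w = new FileWriter(path, true);
                put(path, w);
            }
            return w;
        }

        protected boolean removeEldestEntry(Map.Entry<String, Writer> eldest) {
            if (size() > maxOpen) {
                try {
                    eldest.getValue().close(); // flush and release the handle
                } catch (IOException ignored) {
                    // best effort: eviction should not fail the current write
                }
                return true;
            }
            return false;
        }
    }

Wrapping and rethrowing the underlying IOException, instead of letting it surface
as a ResourceNotFoundException, would address the second suggestion in the report.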
