DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT <http://nagoya.apache.org/bugzilla/show_bug.cgi?id=12424>. ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND INSERTED IN THE BUG DATABASE.
http://nagoya.apache.org/bugzilla/show_bug.cgi?id=12424

Out of memory when using tokenize

           Summary: Out of memory when using tokenize
           Product: XalanJ2
           Version: 2.4Dx
          Platform: PC
        OS/Version: Windows NT/2K
            Status: NEW
          Severity: Critical
          Priority: Other
         Component: org.apache.xalan.lib
        AssignedTo: [EMAIL PROTECTED]
        ReportedBy: [EMAIL PROTECTED]

Hi,

We make heavy use of tokenize, but on large documents (about 3 MB) we get an
Out of memory error. We have made changes to org.apache.xalan.lib.Extensions;
the hotfix is attached:

  protected static DocumentBuilderFactory dbf = null;
  protected static DocumentBuilder db = null;
  protected static Document lDoc = null;

  public static NodeSet tokenize(ExpressionContext myContext,
                                 String toTokenize, String delims)
  {
    // Document lDoc;
    // Document lDoc = myContext.getContextNode().getOwnerDocument();

    try
    {
      if (dbf == null)
      {
        dbf = DocumentBuilderFactory.newInstance();
        db = dbf.newDocumentBuilder();
        lDoc = db.newDocument();
      }
    }
    catch (ParserConfigurationException pce)
    {
      throw new org.apache.xml.utils.WrappedRuntimeException(pce);
    }

    StringTokenizer lTokenizer = new StringTokenizer(toTokenize, delims);
    NodeSet resultSet = new NodeSet();

    while (lTokenizer.hasMoreTokens())
    {
      resultSet.addNode(lDoc.createTextNode(lTokenizer.nextToken()));
    }

    return resultSet;
  }
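For illustration, the core idea of the hotfix (create the DocumentBuilderFactory and owner Document once and reuse them across calls, instead of allocating a fresh Document per tokenize invocation) can be sketched standalone. This is a hedged sketch, not the Xalan code itself: List<Text> stands in for Xalan's NodeSet, and the ExpressionContext parameter is dropped, so the example runs without Xalan on the classpath.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;
import org.w3c.dom.Document;
import org.w3c.dom.Text;

public class TokenizeSketch {
    // Cached owner document, mirroring the static lDoc in the hotfix:
    // created lazily on first use, then shared by every subsequent call.
    private static Document ownerDoc;

    private static synchronized Document ownerDoc() {
        if (ownerDoc == null) {
            try {
                ownerDoc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().newDocument();
            } catch (ParserConfigurationException pce) {
                throw new RuntimeException(pce);
            }
        }
        return ownerDoc;
    }

    // List<Text> stands in for Xalan's NodeSet (assumption for
    // self-containment); the tokenizing loop matches the hotfix.
    public static List<Text> tokenize(String toTokenize, String delims) {
        StringTokenizer lTokenizer = new StringTokenizer(toTokenize, delims);
        List<Text> resultSet = new ArrayList<Text>();
        while (lTokenizer.hasMoreTokens()) {
            resultSet.add(ownerDoc().createTextNode(lTokenizer.nextToken()));
        }
        return resultSet;
    }

    public static void main(String[] args) {
        List<Text> tokens = tokenize("a,b,c", ",");
        System.out.println(tokens.size());           // 3
        System.out.println(tokens.get(1).getData()); // b
    }
}
```

The commented-out lines in the hotfix show the earlier approach of taking the owner document from the context node; the cached static document avoids both that dependency and the per-call allocation that appears to drive the memory growth.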
