[ http://nagoya.apache.org/jira/browse/XERCESC-477?page=history ]

Alberto Massari updated XERCESC-477:
------------------------------------

    Priority: Major

> Memory Leak in SAX Parser
> -------------------------
>
>          Key: XERCESC-477
>          URL: http://nagoya.apache.org/jira/browse/XERCESC-477
>      Project: Xerces-C++
>         Type: Bug
>   Components: SAX/SAX2
>     Versions: 1.6.0
>  Environment: Operating System: Solaris
> Platform: Sun
>     Reporter: Thomas
>     Assignee: Xerces-C Developers Mailing List

>
> I am experiencing a large memory leak using the Xerces 1.6 SAX parser on a 
> Solaris 8.0 machine. The O.S. is installed in 32-bit mode. The code is 
> single-threaded, built with the -DAPP_NO_THREADS directive. 
>  
> Schema and namespace processing are set to false. Validation is set to true. The 
> MemBufInputSource class is the source for the XML document. 
> The application is initialized using the XMLPlatformUtils::Initialize() method, and a 
> new parser is created for each XML buffer that gets parsed. After each use, 
> the XMLPlatformUtils::Terminate() method is called to force cleanup of any remaining 
> static data structures, based upon recommendations found in the FAQ. The 
> following code snippet illustrates how I am using the parser. 
> try {
>     XMLPlatformUtils::Initialize();
> } catch (const XMLException& toCatch) {
>     ....
> }
>
> // create a new parser for each request
> SAXParser parser;
> parser.setValidationScheme(valScheme);
> parser.setDoNamespaces(doNamespaces);
> parser.setDoSchema(doSchema);
> parser.setDocumentHandler(&handler);
> parser.setErrorHandler(&handler);
>
> // wrap the incoming message buffer as an input source
> const char* appName = "AppName";
> MemBufInputSource* memBufIS = new MemBufInputSource
>     ((const XMLByte*)ar->getMsg(), strlen(ar->getMsg()), appName, false);
> parser.parse(*memBufIS);
>
> // use the Terminate method based upon advice in the Xerces FAQ
> XMLPlatformUtils::Terminate();
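>
> For reference, the following is a minimal, self-contained sketch of the same
> per-request pattern, not the actual application code: the handler class, the
> XML message, and the header paths (which assume the Xerces-C 1.x layout) are
> placeholders. It also puts the input source on the stack so it is released
> after each parse.
>
> #include <util/PlatformUtils.hpp>
> #include <util/XMLException.hpp>
> #include <parsers/SAXParser.hpp>
> #include <framework/MemBufInputSource.hpp>
> #include <sax/HandlerBase.hpp>
> #include <string.h>
>
> // Placeholder handler; a real application would override the
> // DocumentHandler/ErrorHandler callbacks it needs.
> class MyHandler : public HandlerBase {};
>
> int main()
> {
>     try {
>         XMLPlatformUtils::Initialize();
>     } catch (const XMLException&) {
>         return 1;
>     }
>
>     const char* xmlMsg = "<?xml version=\"1.0\"?><doc/>";   // placeholder message
>     MyHandler handler;
>     {
>         // one parser per request, as described above
>         SAXParser parser;
>         parser.setValidationScheme(SAXParser::Val_Always);
>         parser.setDoNamespaces(false);
>         parser.setDoSchema(false);
>         parser.setDocumentHandler(&handler);
>         parser.setErrorHandler(&handler);
>
>         // stack-allocated input source, released when the block ends
>         MemBufInputSource memBufIS((const XMLByte*)xmlMsg, strlen(xmlMsg),
>                                    "AppName", false);
>         parser.parse(memBufIS);
>     }
>
>     XMLPlatformUtils::Terminate();
>     return 0;
> }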
>  The process size grows rapidly, as seen using the ps -efl command. The Forte 
> Workshop environment (dbx, etc.) does not detect the leak, however. 
>
>  In an attempt to isolate the leak, the code was instrumented using the ioctl() 
> interface to get the process size in bytes from the system process table. Using this 
> primitive method, it is possible to see where the process is growing. 
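>
> (For illustration, here is a sketch of one way such a getProcSizeInBytes()
> helper could be written on Solaris. It reads the file-based /proc psinfo
> record rather than using the older ioctl() interface, and it is not the
> actual instrumentation used here.)
>
> #include <sys/types.h>
> #include <procfs.h>     /* psinfo_t, structured /proc interface */
> #include <fcntl.h>
> #include <unistd.h>
> #include <stdio.h>
>
> // Returns the size of the current process image in bytes, or 0 on error.
> static unsigned long getProcSizeInBytes()
> {
>     char      path[64];
>     psinfo_t  info;
>
>     sprintf(path, "/proc/%ld/psinfo", (long)getpid());
>     int fd = open(path, O_RDONLY);
>     if (fd < 0)
>         return 0;
>
>     ssize_t n = read(fd, &info, sizeof(info));
>     close(fd);
>     if (n != (ssize_t)sizeof(info))
>         return 0;
>
>     // pr_size is the process image size in kilobytes
>     return (unsigned long)info.pr_size * 1024UL;
> }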
>  The growth appears to be occurring somewhere inside the XMLScanner class. The best I can 
> do to isolate it is somewhere before the constructor of the XMLReader class. 
> The output shows the process size increases somewhere between the point where the 
> ReaderMgr creates the new XMLReader and the XMLReader constructor. The 
> following is taken from ReaderMgr.cpp (around line 439):
>     try {
>         if (src.getEncoding())
>         {
>             cerr << ": createReader: " << getProcSizeInBytes() << endl;
>             retVal = new XMLReader
>             (
>                 src.getPublicId()
>                 , src.getSystemId()
>                 , newStream
>                 , src.getEncoding()
>                 , refFrom
>                 , type
>                 , source
>             );
>             cerr << ": createReader: " << getProcSizeInBytes() << endl;
>         }
>         else
>         {
>             cerr << ": createReader: " << getProcSizeInBytes() << endl;
>             retVal = new XMLReader
>             (
>                 src.getPublicId()
>                 , src.getSystemId()
>                 , newStream
>                 , refFrom
>                 , type
>                 , source
>             );
>             cerr << ": createReader: " << getProcSizeInBytes() << endl;
>         }
>   The size of the leak is not consistent either. The following data shows the 
> deltas in process size between requests. Each request sends the same XML 
> message and gets the same response back. The left column is the process size in 
> bytes; the right column is the change in size from the previous request. 
>
>  Process size    Delta 
>  16,850,944   0 
>  17,817,600   966,656 
>  18,006,016   188,416 
>  18,006,016   0 
>  18,006,016   0 
>  18,006,016   0 
>  18,104,320   98,304 
>  18,169,856   65,536 
>  18,194,432   24,576 
>  18,194,432   0 
>  18,194,432   0 
>  18,243,584   49,152 
>  18,259,968   16,384 
>  18,276,352   16,384 
>  18,276,352   0 
>  18,276,352   0 
>  18,276,352   0 
>  18,341,888   65,536 
>  18,341,888   0 
>  18,448,384   106,496 
>  18,538,496   90,112 
>  18,538,496   0 
>  18,538,496   0 
>  18,538,496   0 
>  18,538,496   0 
>  18,620,416   81,920 
>  18,644,992   24,576 
>  18,661,376   16,384 
>  18,677,760   16,384 
>  18,677,760   0 
>  18,677,760   0 
>  18,677,760   0 
>  18,677,760   0 
>  18,833,408   155,648 
>  18,833,408   0 
>  18,833,408   0 
>  18,833,408   0 
>  18,833,408   0 
>  18,833,408   0 
>  18,833,408   0 
>  18,874,368   40,960 
>  18,874,368   0 
>  18,874,368   0 
>  18,972,672   98,304 
>  18,972,672   0 
>  19,062,784   90,112 
>  19,079,168   16,384 
>  19,161,088   81,920 
>  19,161,088   0 
>  19,161,088   0 
>  19,218,432   57,344 
>  19,234,816   16,384 
>  19,251,200   16,384 
>  19,267,584   16,384 
>  19,267,584   0 
>  19,267,584   0 
>  19,267,584   0 
>  19,267,584   0 
>  19,267,584   0 
>  19,267,584   0 
>  19,267,584   0 
>  19,267,584   0 
>  19,333,120   65,536 
>  19,415,040   81,920 
>  19,415,040   0 
>  19,415,040   0 
>  19,480,576   65,536 
>  19,480,576   0 
>  19,570,588   90,012 
>  19,570,588   0 
>  19,668,992   98,404 
>  19,668,992   0 
>  19,668,992   0 
>  19,668,992   0 
>  19,775,488   106,496 
>  19,775,488   0 
>  19,841,024   65,536 
>  19,841,024   0 
>  19,841,024   0 
>  19,922,944   81,920 
>  19,947,520   24,576 
>  19,963,904   16,384 
>  19,980,288   16,384 
>  19,980,288   0 
>  19,980,288   0 
>  20,037,632   57,344 
>  20,054,016   16,384 
>  20,054,016   0 
>  20,054,016   0 
>  20,054,016   0 
>  20,054,016   0 
>  20,054,016   0 
>  20,119,552   65,536 
>  20,119,552   0 
>  20,160,512   40,960 
>  20,176,896   16,384 
>  20,193,280   16,384 
>  20,217,856   24,576 
>  20,234,240   16,384 
>  20,250,264   16,024 
>  20,250,264   0 
>  20,250,264   0 
>  20,307,968   57,704 
>       
>       
>  What is interesting is that the process size does not always increase, and when 
> it does, it does not increase at the same rate. It almost looks as if some 
> resizing of internal caches or hash tables is taking place at certain intervals. 
>
>  As stated earlier, the debugger in Workshop was not able to detect the leak. 
> Sun was contacted to see why this was the case. After reviewing the problem, 
> they thought it might be related to some known problems with their compiler and 
> debugger. After installing some upgrades and patches, the memory leak was still 
> not showing up using their tools. At this point Sun believes it is a programming 
> logic problem within the Xerces library. 
>  
>  This bug appears to be related to bugs 8984 and 7278, which have not had any 
> change in status since they were entered back in March and early May, 
> respectively. 
>  Thank you for your assistance.
> Thomas

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://nagoya.apache.org/jira/secure/Administrators.jspa
-
If you want more information on JIRA, or have a bug to report see:
   http://www.atlassian.com/software/jira


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
