Hi everybody,
We are using Xalan 2.1.0 & Xerces 1.4.0 on Solaris 2.6
(the box is an E420R with 4 CPUs and 4 GB RAM). For development,
we use a mix of Intel boxes running Linux and/or NT.
The exception we get is this:
java.lang.ArrayIndexOutOfBoundsException
at org.apache.xerces.utils.UTF8DataChunk.toString(UTF8DataChunk.java:148)
at org.apache.xerces.utils.UTF8DataChunk.addSymbol(UTF8DataChunk.java:405)
at org.apache.xerces.utils.UTF8DataChunk.addSymbol(UTF8DataChunk.java:390)
at org.apache.xerces.readers.UTF8Reader.addSymbol(UTF8Reader.java:124)
at org.apache.xerces.readers.UTF8Reader.scanQName(UTF8Reader.java:1406)
at org.apache.xerces.framework.XMLDocumentScanner.scanAttributeName(XMLDocumentScanner.java:2141)
at org.apache.xerces.framework.XMLDocumentScanner.scanElement(XMLDocumentScanner.java:1807)
at org.apache.xerces.framework.XMLDocumentScanner$ContentDispatcher.dispatch(XMLDocumentScanner.java:1238)
at org.apache.xerces.framework.XMLDocumentScanner.parseSome(XMLDocumentScanner.java:381)
at org.apache.xerces.framework.XMLParser.parse(XMLParser.java:1035)
at org.apache.xalan.processor.ProcessorInclude.parse(ProcessorInclude.java:303)
at org.apache.xalan.processor.ProcessorInclude.startElement(ProcessorInclude.java:189)
at org.apache.xalan.processor.StylesheetHandler.startElement(StylesheetHandler.java:631)
at org.apache.xerces.parsers.SAXParser.startElement(SAXParser.java:1376)
at org.apache.xerces.validators.common.XMLValidator.callStartElement(XMLValidator.java:1197)
at org.apache.xerces.framework.XMLDocumentScanner.scanElement(XMLDocumentScanner.java:1862)
at org.apache.xerces.framework.XMLDocumentScanner$ContentDispatcher.dispatch(XMLDocumentScanner.java:1238)
at org.apache.xerces.framework.XMLDocumentScanner.parseSome(XMLDocumentScanner.java:381)
at org.apache.xerces.framework.XMLParser.parse(XMLParser.java:1035)
at org.apache.xalan.processor.TransformerFactoryImpl.newTemplates(TransformerFactoryImpl.java:864)
Note that we do not see the problem on any of the development boxes,
even though we run the _exact_ same binary code, and the data we process
is _exactly_ the same, byte for byte. There are two plausible causes:
-- the endianness of the machine matters (a bug in the VM)
-- there is some sort of race condition that we can hit more easily
(because we run with 4 CPUs rather than with 1)
After further investigation, we think it's a race condition. Let me explain why.
Here is the piece of code that barfs:
138: public String toString(int offset, int length) {
139:
140: synchronized (fgTempBufferLock) {
141: int outOffset = 0;
142: UTF8DataChunk dataChunk = this;
143: int endOffset = offset + length;
144: int index = offset & CHUNK_MASK;
145: byte[] data = fData;
146: boolean skiplf = false;
147: while (offset < endOffset) {
148: int b0 = data[index++] & 0xff;
We tried to print the offending index, and indeed, we got 16384
(which is invalid, since the _size_ of the array is 16384).
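A quick arithmetic check shows why 16384 is such a telling value. In a standalone sketch (the constants below are our assumption, inferred from the observed array size, not read out of the Xerces source), the mask alone can never produce 16384; the only way to reach it is for the index++ in the loop to step one past the chunk end, e.g. if endOffset was computed from a stale or inconsistent length. This is a simplified model of the loop, not the real Xerces logic, which normally follows the chunk chain at the boundary:

```java
public class ChunkIndexCheck {
    // Assumed constants: the observed array size was 16384, so we model
    // the chunk as 16 KB with a low-bits mask (our assumption).
    static final int CHUNK_SIZE = 16384;
    static final int CHUNK_MASK = CHUNK_SIZE - 1; // 0x3FFF

    public static void main(String[] args) {
        // 1) The mask alone can never yield 16384: it keeps only the
        //    low 14 bits, so the result is always in [0, 16383].
        int max = 0;
        for (int offset = 0; offset < 4 * CHUNK_SIZE; offset++) {
            max = Math.max(max, offset & CHUNK_MASK);
        }
        System.out.println("max masked index = " + max);

        // 2) But if the loop crosses a chunk boundary without switching
        //    chunks, index++ lands on exactly 16384 one iteration later:
        int offset = CHUNK_SIZE - 1;      // last byte of the chunk
        int endOffset = offset + 2;       // a length that straddles the boundary
        int index = offset & CHUNK_MASK;  // 16383
        int firstBadIndex = -1;
        while (offset < endOffset) {
            if (index >= CHUNK_SIZE && firstBadIndex < 0) firstBadIndex = index;
            index++;                      // stands in for data[index++]
            offset++;
        }
        System.out.println("first out-of-range index = " + firstBadIndex);
    }
}
```

So the exception we see is exactly what a one-past-the-end read at a chunk boundary would look like.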
However, getting at that value took some care. We first inserted this statement:
System.out.println("UTF8DataChunk:offset=" + offset + " length=" + length
    + " endOffset=" + endOffset + " index=" + index);
between lines 147 and 148, and the problem went away! (Not too surprising:
System.out.println is itself synchronized, so it both slows the thread and
acts as a memory barrier, exactly the kind of perturbation that hides a
race.) So, to have as little effect on the timing as possible, we wrapped
the print in an if that tests for invalid values:
if (index < 0 || index >= data.length) {
System.out.println("UTF8DataChunk:offset=" + offset + " length=" + length
+ " endOffset=" + endOffset + " index=" + index);
}
And so we got to the invalid index. Moreover, if we insert this statement:
if ((index % 100000) == -1) {
    try {
        Thread.sleep(1);
    } catch (InterruptedException e) {
        System.out.println("UTF8DataChunk:couldn't sleep:offset=" + offset
            + " endOffset=" + endOffset + " index=" + index);
    }
}
which can never execute (for a non-negative index, index % 100000 is
always >= 0, so it can never equal -1), the problem went away as well.
And thus we concluded that it must be a race condition.
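If it really is a race, it should be reproducible by hammering the parser from several threads at once on the 4-CPU box. Here is a minimal stress harness along those lines (a sketch using the standard JAXP SAX API; class and variable names are made up). As written, each thread builds its own parser, which is the safe configuration and should complete cleanly; the experiment would be to share a single parser instance across the threads instead and see if the exception reappears:

```java
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;
import java.io.StringReader;
import java.util.concurrent.atomic.AtomicInteger;

public class ParseStress {
    public static void main(String[] args) throws Exception {
        final String xml = "<root><a>x</a><b>y</b></root>";
        final AtomicInteger ok = new AtomicInteger();
        Thread[] workers = new Thread[4]; // one per CPU on the server box
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    try {
                        // Each thread gets its own parser; sharing one
                        // across threads is what we suspect triggers the race.
                        SAXParser p =
                            SAXParserFactory.newInstance().newSAXParser();
                        for (int n = 0; n < 1000; n++) {
                            p.parse(new InputSource(new StringReader(xml)),
                                    new DefaultHandler());
                        }
                        ok.incrementAndGet();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println("threads completed OK: " + ok.get());
    }
}
```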
I should also note that we thought we had found a workaround: handing the
parser Readers instead of InputStreams. That seemed to avoid the problem,
but this morning it came back to bite us despite the Reader trick.
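Since the stack trace goes through TransformerFactoryImpl.newTemplates, we are on the JAXP API, and the usual mitigation for this class of problem is to stop sharing parser/transformer state across threads: compile the stylesheet once into a Templates object (which JAXP documents as safe for concurrent use) and create a fresh Transformer per thread or per call. A sketch of that pattern (the stylesheet and class names here are a made-up minimal example, not our real code); we cannot yet confirm it avoids the crash:

```java
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

public class PerThreadTransform {
    // Compile the stylesheet once; Templates is documented as thread-safe.
    private static final Templates TEMPLATES = compile();

    private static Templates compile() {
        String xsl =
            "<xsl:stylesheet version='1.0' "
          + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
          + "<xsl:template match='/'><out><xsl:value-of select='root'/></out>"
          + "</xsl:template></xsl:stylesheet>";
        try {
            return TransformerFactory.newInstance()
                    .newTemplates(new StreamSource(new StringReader(xsl)));
        } catch (TransformerConfigurationException e) {
            throw new RuntimeException(e);
        }
    }

    static String transform(String xml) throws TransformerException {
        // Transformer is NOT thread-safe: create one per call (or keep
        // one per thread), never one shared instance for all threads.
        Transformer t = TEMPLATES.newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)),
                    new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transform("<root>hello</root>"));
    }
}
```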
Any help with this problem would be greatly appreciated.
--
Dimi.
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]