Thanks!

What is the scope of these DTM limitations? Is it per Transformer object? TransformerFactory? Xalan as a whole?

Eric


On 09/30/2014 04:52 PM, Joseph Kesselman/Watson/IBM wrote:
A DTM ID is consumed by each document tree Xalan is working with. Temporary result trees, for example, each occupy a DTM.

For reasons having to do with DTM's history and the use cases it was tuned for, there is a fixed number of bits that must be split between selecting a DTM and selecting nodes within that DTM. If you're using many small trees, reducing the number of bits used for node selection will increase the number of trees available, at the cost of reducing the number of nodes each DTM can hold.
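As a rough illustration of the arithmetic (the numbers here are illustrative; the actual constant is IDENT_DTM_NODE_BITS on org.apache.xml.dtm.DTMManager, and the default may differ between builds):

// Sketch of how a 32-bit node handle is divided between "which DTM" and
// "which node within that DTM". nodeBits stands in for Xalan's
// IDENT_DTM_NODE_BITS; 16 is an assumed, illustrative value.
public class DtmHandleSplit {
    public static void main(String[] args) {
        int totalBits = 32;
        int nodeBits = 16;                            // illustrative value
        long maxNodesPerDtm = 1L << nodeBits;         // nodes addressable in one DTM
        long maxDtms = 1L << (totalBits - nodeBits);  // distinct DTM IDs available
        System.out.println("node bits = " + nodeBits
                + " -> max nodes per DTM = " + maxNodesPerDtm
                + ", max DTMs = " + maxDtms);
    }
}

Shrinking the node bits by one doubles the number of available DTM IDs and halves the number of nodes each DTM can hold, which is the tradeoff described above.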

At one point, IBM modified the DTM code so documents could overflow from one DTM into another. That allowed us to push the DTM Node Bits down (allowing more documents) while still being able to handle larger documents. I don't remember whether that change was checked into Xalan or was applied only to the Xalan derivative IBM was shipping in its own JDKs, but if you're ambitious you could go digging through the code to check on that.

Not that it helps Apache, but IBM's second-generation XSLT processor (which ships with recent IBM JDKs, and in an enhanced version as the WebSphere XML Feature) redesigned the document model code to avoid the multiplicity-versus-size tradeoff.


______________________________________
"Everything should be as simple as possible. But not simpler." -- attributed to Albert Einstein


From:   "Eric J. Schwarzenbach" <eric.schwarzenb...@wrycan.com>
To:     j-users@xalan.apache.org
Date:   09/30/2014 04:06 PM
Subject:        No more DTM IDs are available


------------------------------------------------------------------------



Hi,

We're getting the dreaded "No more DTM IDs are available" error and are
looking for some pointers on how to debug this.

First of all, yes, we know this is (or at least was, with past JDKs)
usually a matter of running an old version of Xalan or picking up the
JDK's built-in copy of Xalan instead of the intended Xalan jar, and we
are positive that is not the case here. We've checked object class names
and verified the versions with org.apache.xalan.xslt.EnvironmentCheck,
and we are definitely using 2.7.2 (2.7.1 until today; I upgraded to make
sure the newest version made no difference).
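(In case it helps anyone else hitting this: the check can be run from the
command line or programmatically. A minimal sketch of the latter, using
EnvironmentCheck's checkEnvironment(PrintWriter) call:)

import java.io.PrintWriter;
import org.apache.xalan.xslt.EnvironmentCheck;

public class XalanVersionCheck {
    public static void main(String[] args) {
        // Dumps the Xalan/Xerces versions and jars actually visible on the
        // classpath; returns false if it spots a known-bad combination.
        boolean ok = new EnvironmentCheck()
                .checkEnvironment(new PrintWriter(System.out, true));
        System.out.println("Environment OK: " + ok);
    }
}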

Our application is probably putting more stress on Xalan than most:
there are a lot of calls to Java extension functions from our XSLs, some
of which return large NodeLists, and some of which even kick off
additional transforms. We've been using the application like this for
years, with pretty large data sets, and the only thing that has changed
to cause us to run into the error is some combination of the amount of
data and the particular XSLs.
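(For illustration only, a simplified sketch rather than our actual code:
one of these extension functions has roughly the following shape, and is
called from the XSL through Xalan's Java extension namespace, e.g.
xmlns:ext="xalan://com.example.CatalogExtensions" with
select="ext:loadItems('items.xml')". The class and element names here are
made up.)

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class CatalogExtensions {
    // Builds a fresh DOM and hands a NodeList back to the stylesheet.
    // Each external tree returned this way is another document tree Xalan
    // has to wrap in a DTM before the transform can address its nodes.
    public static NodeList loadItems(String path) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File(path));
        return doc.getElementsByTagName("item");
    }
}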

So what I'm looking for are some tips on how to debug or ameliorate
this. Are there particular scenarios or operations where Xalan is
particularly likely to use (or hold onto) more DTMs than usual? Is there
any debugging we can turn on to have it output DTM usage info that we
could use to determine where in the course of our XSL execution DTM
usage starts getting out of hand?

If worst comes to worst, we will dig into the source and see if we can
add logging of our own to debug this, or perhaps try changing
IDENT_DTM_NODE_BITS and DEFAULT_BLOCKSIZE, which some posts on this list
(from 2006!) say can help with this. But we'd rather explore other
options before going there...
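(One cheap sanity check we may try first, a sketch that reads the
constants via reflection so it reports the values in whatever Xalan jar
is actually loaded at runtime rather than copies inlined at compile time;
IDENT_DTM_NODE_BITS and IDENT_MAX_DTMS appear to be public constants on
org.apache.xml.dtm.DTMManager in 2.7.x, as far as I can tell:)

public class DtmConstantsProbe {
    public static void main(String[] args) throws Exception {
        Class<?> mgr = Class.forName("org.apache.xml.dtm.DTMManager");
        // Read via reflection to avoid compile-time constant inlining.
        int nodeBits = mgr.getField("IDENT_DTM_NODE_BITS").getInt(null);
        int maxDtms  = mgr.getField("IDENT_MAX_DTMS").getInt(null);
        System.out.println("IDENT_DTM_NODE_BITS = " + nodeBits);
        System.out.println("IDENT_MAX_DTMS      = " + maxDtms);
        System.out.println("Loaded from: "
                + mgr.getProtectionDomain().getCodeSource());
    }
}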

Thanks,

Eric


