Daniel Carrera wrote:
Cyrille Moureaux wrote:
Even if, as you say (and as is true), the tag name doesn't have any
bearing on the size of the document representation in memory (because
the actual tag string is only used in conjunction with the file I/O),
each character of the tag has to be read from the hard drive and
inspected while parsing the XML file,
True, the tag has to be read from the hard disk. But then the problem is
hard disk access, not traversing the data in memory, which is what
our friend was arguing.
The problem is that the tag isn't on the hard disk. The zipped file is
on the hard disk. The tag is in the unzipped file in RAM.
which might (keyword: might) have an impact on the time taken to do
that operation (which is, if I understand correctly, what we're talking
about).
The OP thought that the size of the XML tag would be a problem because
it would be slow to traverse in memory. I tried to explain that
reducing the size of the tag was premature optimization (i.e. little gain
in speed for a large increase in obfuscation).
Actually, I never said that. But it *could* be a problem, at least
indirectly, if a) there were hundreds of thousands of them and b) the
memory in question were actually virtual memory, requiring many, many disk
reads and writes to swap things back and forth.
This is why I asked if the XML was processed in a serial fashion. If so,
then it shouldn't be necessary to hold the whole file in memory to
process it. Assuming you had enough RAM, I don't know if it would make
much difference either way, but I clearly didn't have enough.
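
For example, here is a minimal sketch of the kind of serial processing I
mean (Python, and assuming an OpenDocument file called example.odt; the
filename and the element counted are only illustrative). It parses
content.xml straight out of the zipped file, so the XML is inflated into
RAM a chunk at a time and the parser never holds the whole document tree:

    import zipfile
    import xml.etree.ElementTree as ET

    with zipfile.ZipFile("example.odt") as odf:      # zipped file on disk
        with odf.open("content.xml") as stream:      # decompressed on the fly
            count = 0
            for event, elem in ET.iterparse(stream, events=("end",)):
                if elem.tag.endswith("}p"):          # e.g. text:p paragraphs
                    count += 1
                elem.clear()                         # drop the subtree just seen
            print(count, "paragraph-like elements")

With a streaming approach like that, the long tag names only cost time on
the way through the parser; they never have to sit in memory as a whole
tree.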
--
Rod