> Or would one have to write his own indexing?

Depends on what you mean by "write his own indexing".  There is no way a 
generic indexing tool would know which parts of the XML schema you wanted 
to index - so, at minimum, you would have to tell any tool which fields 
you wanted indexed.
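To make the point concrete, here is a minimal sketch (in Python, with a made-up schema and field name) of what "telling the tool which fields to index" amounts to - mapping chosen field values back to the documents that contain them:

```python
# Minimal sketch: index a chosen field of several XML documents into a
# dict so lookups don't require re-parsing every file. The schema and
# field name ("author") are hypothetical.
import xml.etree.ElementTree as ET

def build_index(xml_docs, field):
    """Map each value of `field` to the ids of documents containing it."""
    index = {}
    for doc_id, xml_text in xml_docs.items():
        root = ET.fromstring(xml_text)
        for el in root.iter(field):
            index.setdefault(el.text, []).append(doc_id)
    return index

docs = {
    "a.xml": "<book><author>Smith</author></book>",
    "b.xml": "<book><author>Jones</author></book>",
}
print(build_index(docs, "author"))  # {'Smith': ['a.xml'], 'Jones': ['b.xml']}
```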
------------------

I really didn't mean it the way it sounds - I don't want to write my own 
indexing system.  I was referring to the idea behind indexing, i.e. one 
large repository that is searched, which is what I meant by combining all 
the XML docs into one.  ;-)

------------------
> And then do an XPath on this file, but it's still not optimal I reckon...

Yep - that was my other suggestion.  Once again "optimal" really depends 
on what the user will accept.  XPath is *very* clever and you would be 
surprised at how quickly it works.  Remember, it searches a document that 
has been broken down into DOM objects.  So it isn't parsing the XML as it 
searches.  And given that your searchable values are actually part of the 
XML structure (and not just attributes) this is optimal for the type of 
data.
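The point about searching parsed objects rather than raw text can be sketched like this (Python's ElementTree here stands in for whatever XML parser you use; the sample catalog is made up):

```python
# Sketch: once the document has been parsed into an in-memory tree,
# XPath-style queries walk the tree objects - the XML text is not
# re-parsed on every search. Sample XML is invented for illustration.
import xml.etree.ElementTree as ET

xml_text = """
<catalog>
  <book><title>XML Basics</title><price>10</price></book>
  <book><title>Advanced XPath</title><price>25</price></book>
</catalog>
"""
root = ET.fromstring(xml_text)  # parse once into a DOM-like tree

# query the already-built tree; findall supports a subset of XPath
titles = [t.text for t in root.findall("./book/title")]
print(titles)  # ['XML Basics', 'Advanced XPath']
```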
------------------

I know XPath itself is quite fast, but I was referring more to the system 
having to read the whole file into memory before XPath can query it.
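If the memory cost of building the full tree is the worry, one common workaround (sketched here in Python; the data is inlined via BytesIO purely for illustration - in practice you would pass a file path) is incremental parsing, which lets you examine and discard elements as they stream past instead of holding the whole document:

```python
# Sketch of incremental parsing: iterparse yields elements as the parser
# reaches them, so processed elements can be cleared and the full tree
# never needs to sit in memory at once. Data is inlined for illustration.
import io
import xml.etree.ElementTree as ET

xml_stream = io.BytesIO(
    b"<catalog>"
    b"<book><title>XML Basics</title></book>"
    b"<book><title>Advanced XPath</title></book>"
    b"</catalog>"
)

titles = []
for event, el in ET.iterparse(xml_stream, events=("end",)):
    if el.tag == "title":
        titles.append(el.text)
    el.clear()  # free the element's children once it has been processed
print(titles)  # ['XML Basics', 'Advanced XPath']
```

The trade-off is that you give up arbitrary XPath queries over the whole document - you only see each element once as it goes by - so it suits extraction and filtering rather than ad-hoc searching.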

Taco

---
You are currently subscribed to cfaussie as: [EMAIL PROTECTED]
To unsubscribe send a blank email to [EMAIL PROTECTED]

MXDU2004 + Macromedia DevCon AsiaPac + Sydney, Australia
http://www.mxdu.com/ + 24-25 February, 2004