Stewart Stremler wrote:
So you store it in pseudo-XML or something?
Define store. In memory or on-disk?
In memory, the structure is called an R-Tree. It needs to be very
efficient for spatial queries.
On disk, you tend to want to write the polygons out as a flat structure
with linkage rather than hierarchy. This allows you to do efficient
bulk loading of the R-Tree if it was written out sorted by X and Y
coordinate, but still winds up with a correct (if not terribly
efficient) R-tree structure.
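A rough sketch (my own, not anything from a real EDA format) of the bulk-loading idea: if the flat file is already sorted by X and Y coordinate, you can pack consecutive runs of M rectangles into leaves and build parent levels from their bounding boxes, which yields a valid, if not optimally split, R-tree. The fan-out M and the Node layout here are illustrative choices.

```python
M = 4  # fan-out: max entries per node (illustrative)

def union(boxes):
    """Bounding box (minx, miny, maxx, maxy) of an iterable of boxes."""
    xs0, ys0, xs1, ys1 = zip(*boxes)
    return (min(xs0), min(ys0), max(xs1), max(ys1))

class Node:
    def __init__(self, children, leaf):
        self.children = children  # rectangles (leaf) or child Nodes
        self.leaf = leaf
        self.bbox = union(children if leaf
                          else [c.bbox for c in children])

def bulk_load(rects):
    """Pack rects (already sorted by X, then Y) into an R-tree.

    Each rect is (minx, miny, maxx, maxy).  Consecutive runs of M
    rectangles become leaves; levels above are built the same way
    until a single root remains.
    """
    level = [Node(rects[i:i + M], leaf=True)
             for i in range(0, len(rects), M)]
    while len(level) > 1:
        level = [Node(level[i:i + M], leaf=False)
                 for i in range(0, len(level), M)]
    return level[0]

# A small grid of 10x10 boxes, sorted the way the flat file would be.
rects = sorted((x, y, x + 10, y + 10)
               for x in range(0, 50, 10) for y in range(0, 50, 10))
root = bulk_load(rects)
```

Because the input is spatially sorted, nearby rectangles land in the same leaf, so the packed tree is reasonably tight even without the usual insert-time split heuristics.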
Most people who
design a format invariably create a parser with specific assumptions
because "it's easier". Later, they can't change that because "we have
all this existing data".
As if they wouldn't do that in XML?
Normally, no. DOM can vacuum up unknown tags even if the end user
doesn't look at them. SAX works fine as long as the program doesn't
blow chunks when it gets an unknown event and just keeps reading
tokens until the matching end tag.
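A minimal sketch of that "keep reading past unknown tags" behaviour, using Python's xml.sax. The handler only acts on tags it knows; unknown elements still generate events, which it simply ignores, so new tags in the data don't break old readers. The tag names are illustrative, not from any real schema.

```python
import xml.sax

class TolerantHandler(xml.sax.ContentHandler):
    """Collects <name> text; silently skips any tag it doesn't know."""
    KNOWN = {"name"}

    def __init__(self):
        super().__init__()
        self.names = []
        self._capture = False
        self._buf = []

    def startElement(self, tag, attrs):
        if tag in self.KNOWN:
            self._capture = True
            self._buf = []
        # Unknown tags: do nothing -- SAX keeps delivering events anyway.

    def characters(self, data):
        if self._capture:
            self._buf.append(data)

    def endElement(self, tag):
        if tag in self.KNOWN:
            self.names.append("".join(self._buf))
            self._capture = False

handler = TolerantHandler()
xml.sax.parseString(b"<layout><rect><name>lna_plus</name>"
                    b"<futureTag>ignored</futureTag></rect></layout>",
                    handler)
```

The `<futureTag>` element sails straight through: the parser reports its events, the handler declines to act on them, and parsing continues to the end of the document.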
I actually quite like the PullDom idioms for this. Use SAX to blast
through the elements which encompass large amounts of structure, but use
DOM on small arenas. e.g.:
<layout>
  <rect>
    <name>lna_plus</name>
    <llpoint><x>0</x><y>0</y></llpoint>
    <urpoint><x>100</x><y>100</y></urpoint>
  </rect>
</layout>
SAX would be used to recognize <layout> and <rect>, but DOM would be
used to pull in the detailed structure of <rect>.
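A small sketch of that idiom with Python's xml.dom.pulldom, using the <layout>/<rect> document from the post: stream through pull events until a <rect> start tag appears, then expand just that element into a DOM subtree.

```python
from xml.dom import pulldom

XML = """<layout>
  <rect>
    <name>lna_plus</name>
    <llpoint><x>0</x><y>0</y></llpoint>
    <urpoint><x>100</x><y>100</y></urpoint>
  </rect>
</layout>"""

def point(el):
    """Read an <x>/<y> pair out of a small DOM subtree."""
    return (int(el.getElementsByTagName("x")[0].firstChild.data),
            int(el.getElementsByTagName("y")[0].firstChild.data))

def read_rects(xml_text):
    """Yield (name, llpoint, urpoint) for each <rect> in the layout."""
    doc = pulldom.parseString(xml_text)
    for event, node in doc:
        # Stream past enclosing structure until a <rect> starts...
        if event == pulldom.START_ELEMENT and node.tagName == "rect":
            doc.expandNode(node)  # ...then pull that subtree into DOM
            name = node.getElementsByTagName("name")[0].firstChild.data
            yield (name,
                   point(node.getElementsByTagName("llpoint")[0]),
                   point(node.getElementsByTagName("urpoint")[0]))

rects = list(read_rects(XML))
```

Only one <rect> subtree lives in memory at a time, so a <layout> with millions of rectangles streams through at SAX cost while each rectangle is still handled with DOM convenience.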
Sounds like it's a vendor problem. No need to inflict XML on the
rest of the world for *that*. Go beat up your vendors with a stick...
It didn't work with Microsoft when a whole lot more people were
clamoring for exactly that. Why should it work on EDA companies?
And, um, actually that's part of the benefit of XML. XML *is* the stick
we can use to beat up vendors.
-a
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg