Recently, I was researching a way to get rid of an XML document we use in our Web-based app for state communication. The hope was that JSON/GWT-RPC would strip out all the tag and attribute noise we had in the document.
For smaller sets of data, this seemed to be true, though the saving was only about a 30% reduction, and the time cost on the client side made even that iffy. Still, it was smaller. However, when I ramped up the testing from a 3K XML document to a 34K one... ouch. The content length of the posted data inflated to 54K?

Basically, I have a simple set of DTOs. One is an XML node that has name and text fields, an array of 0 or more child nodes, and a reference to its attributes. Our XML follows a pattern, and this pattern seemed most simply captured in that structure. I have tried a hashtable for the attributes, and the size came out a little larger; I also tried a List for the child nodes, with no improvement. (FYI: of course I did add the gwt.typeArgs annotations.)

A hex dump using Fiddler shows a lot of encoding, and the content header seems to indicate UTF-8. Now, I know that in some cases UTF-8 can inflate string length (for non-ASCII characters, if I recall), but the content in this test is all ASCII. A co-worker looked at the hex stream and suggested some binary data is being encoded using Base64. Hmmm? What binary data? All the content of my DTOs is String or null.

Is there some explanation of this phenomenon I can read? My searches so far turn up nothing. It just seems counterintuitive that a chatty XML file is more efficient than a lean JSON DTO.

- John
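For reference, the node DTO described above might look roughly like this minimal sketch. The class and field names here are illustrative guesses, not the actual classes from the app; a real GWT-RPC DTO would implement `java.io.Serializable` or GWT's `IsSerializable` marker.

```java
import java.io.Serializable;

// Minimal sketch of the node DTO pattern described in the post:
// a name, optional text, 0 or more child nodes, and attributes.
// Names are illustrative, not the app's actual classes.
public class XmlNodeDTO implements Serializable {
    public String name;                                        // element name
    public String text;                                        // text content, or null
    public XmlNodeDTO[] children = new XmlNodeDTO[0];          // 0 or more child nodes
    public XmlAttributeDTO[] attributes = new XmlAttributeDTO[0];
}

class XmlAttributeDTO implements Serializable {
    public String name;
    public String value;
}
```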
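On the UTF-8 point: for pure-ASCII content, UTF-8 uses exactly one byte per character, so the charset alone cannot explain the inflation. A quick check (string values here are arbitrary examples):

```java
import java.nio.charset.StandardCharsets;

// UTF-8 encodes each ASCII character in 1 byte; only non-ASCII
// characters take 2 or more bytes.
public class Utf8Check {
    public static void main(String[] args) {
        String ascii = "attribute";         // 9 ASCII characters
        String accented = "attribut\u00e9"; // 8 ASCII characters + 'é'
        System.out.println(ascii.getBytes(StandardCharsets.UTF_8).length);    // 9
        System.out.println(accented.getBytes(StandardCharsets.UTF_8).length); // 10
    }
}
```

So whatever is doubling the payload, it is not the UTF-8 charset acting on ASCII data.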
