Ian Lynch wrote:
No, you are persevering on an untenable argument because you lack the technical knowledge to back it up. The people with the technical knowledge know you are mistaken; they do not necessarily know the exact reason for your performance problem, but they do know it's not the thing you think it is. When in a hole, stop digging ;-)
Ian, I haven't gotten *any* information except, "Whatever it is, it isn't that, you silly poop." Would you be satisfied with that?
I repeat, I am *not* making any ****ing assertion! I asked a question; a not unreasonable question. If the file is 11 times bigger, doesn't it make some sense that it would take longer to wade through?
If the tags surrounding each cell were 800 bytes or 8000 bytes, would you still call it a silly question? At what point does it become silly? 500 bytes... 250 bytes... 100 bytes?
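To put rough numbers on the per-cell overhead, here's a back-of-the-envelope comparison. The XML string below is an illustrative OpenDocument-style cell, not taken from an actual saved file, so the exact byte counts are assumptions:

```python
# Compare the bytes needed to store one numeric cell as CSV
# versus wrapped in OpenDocument-style XML tags (illustrative markup).
csv_cell = "3.14,"
xml_cell = ('<table:table-cell office:value-type="float" '
            'office:value="3.14"><text:p>3.14</text:p>'
            '</table:table-cell>')

overhead = len(xml_cell) - len(csv_cell)   # extra bytes per cell
ratio = len(xml_cell) / len(csv_cell)      # size blow-up factor

print(f"CSV: {len(csv_cell)} bytes, XML: {len(xml_cell)} bytes")
print(f"overhead: {overhead} bytes/cell, ratio: {ratio:.0f}x")
```

Even with tags nowhere near 800 bytes, the markup alone can account for an order-of-magnitude size difference across a million cells.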
What's not in dispute is that when I saved or loaded the file, the disc sat and thrashed for 30 minutes. That tells me that whatever it's doing, it's having to swap stuff back and forth from VM. That didn't happen when working with the XLS or the CSV.
Now, considering that whichever file is loaded you end up with the same data structures in working memory, it's reasonable to ask what's going on in the middle.
And it's not unreasonable to speculate that having a 45 MB file loaded into memory, when you don't have a lot of headroom to start with -- that's why I bought more RAM recently -- could knock you over the edge into VM swap. It's a speculation, not an assertion.
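Some rough arithmetic on that speculation. The 4x expansion factor below is a pure assumption for illustration; real in-memory overhead for parsed XML varies widely, but it's rarely less than the file size:

```python
# Sketch of the swap speculation: if both an intermediate parse tree
# and the final objects are alive at once, peak memory can be a large
# multiple of the on-disk file size. The 4x factor is an assumption.
file_mb = 45
expansion = 4                       # assumed bytes-in-memory per byte-on-disk
intermediate_mb = file_mb * expansion   # transient parse representation
final_mb = file_mb * expansion          # internal spreadsheet objects
peak_mb = intermediate_mb + final_mb    # both held until parsing finishes
print(peak_mb)
```

Under those (assumed) numbers you'd transiently need a few hundred MB, which on a machine already short of headroom is plausibly enough to start swapping.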
This thread would have been a lot shorter if Daniel had said, "That might be an issue in marginal cases where you run short of RAM, but in general the extra processing comes into play because the XML file has to be processed to create a representation of the tree in memory, which then has to be mapped to the internal DOM objects. So there's an extra step, and at any given time you have essentially two representations of the file in memory until the DOM is constructed and you can discard the intermediate stuff." Note that I haven't gotten enough information to actually make that statement and it may be completely wrong; it's still speculation based on the little tidbits that get thrown my way.
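The tree-versus-streaming distinction behind that guess can be sketched in a few lines. Whether Calc actually parses this way is exactly the speculation above; this just shows the general difference between a parse that materializes the whole document and one that handles one element at a time:

```python
# Tree-building parse (minidom) holds the entire document in memory at
# once; a streaming SAX parse keeps only what the handler retains.
import xml.dom.minidom
import xml.sax
from io import BytesIO

doc = "<rows>" + "<cell>1</cell>" * 1000 + "</rows>"

# Tree parse: the whole node tree exists in memory simultaneously.
dom = xml.dom.minidom.parseString(doc)
tree_cells = len(dom.getElementsByTagName("cell"))

# Streaming parse: only a running count survives; no tree is kept.
class CellCounter(xml.sax.ContentHandler):
    def __init__(self):
        self.count = 0
    def startElement(self, name, attrs):
        if name == "cell":
            self.count += 1

handler = CellCounter()
xml.sax.parse(BytesIO(doc.encode()), handler)

print(tree_cells, handler.count)  # both approaches see 1000 cells
```

If a loader builds a full parse tree and then maps it to its own objects, both representations coexist until the mapping finishes, which is the "two copies in memory" scenario described above.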
But instead I get (and I paraphrase): "What a stupid thing to think, why would you think that?" followed with vague comments about XML being slow to parse.
I give up; I guess I'll never find out. All I know is that Calc sucks for large spreadsheets, I'll just never know why.
-- Rod
