Daniel Kasak wrote:

[EMAIL PROTECTED] wrote:

There's an interesting debate going on at ZDNet about resource usage in OOo 2.0. It would be great if you guys put in your 2 cents...

http://blogs.zdnet.com/Ou/?p=119


The author of this article went out of his way to create a spreadsheet that would perform badly under OOo. The reason it takes so long to load is that the XML for each cell has to be parsed by OOo.
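(For what it's worth, the per-cell parse cost is visible in the format itself: an .ods file is a ZIP archive whose content.xml carries one XML element per cell, modulo run-length compression of repeated cells, so a load has to walk an element per cell. A minimal sketch, assuming Python is at hand and using a placeholder filename:

import zipfile
import xml.etree.ElementTree as ET

# ODF's table namespace; cells appear as table:table-cell elements.
TABLE_NS = "{urn:oasis:names:tc:opendocument:xmlns:table:1.0}"

with zipfile.ZipFile("big_sheet.ods") as ods:  # placeholder path
    with ods.open("content.xml") as content:
        cells = 0
        # Stream-parse so a huge sheet doesn't blow up memory.
        for _, elem in ET.iterparse(content):
            if elem.tag == TABLE_NS + "table-cell":
                cells += 1
            elem.clear()
print("cell elements parsed:", cells)

Note the count can understate the real cell total, since repeated cells may be collapsed into one element with a repeat attribute.)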

Before you go off on that line, could I ask how you know he fudged the data to favor one application over the other? When I read the article it appeared that he just chose a log file that he normally uses. It might be that this test does favor Excel, but saying he purposefully did so can backfire.


Those are my thoughts on the matter. Posting comments to the website is broken, so I'd say the M$ fanboy has done this to prevent more OOo users from questioning the validity of claims based on this extreme test case. Obviously most documents open quite fast, and he would know this, but he chose to present a case where OOo was quite slow and made it the focus of the article.

I have to point out that the one thing I did not see on the ZDNet thread was anyone posting numbers from their own test results. The truth is OOo is a good package, but performance is not its strong suit, at least not on my WinXP machine. I tend to think this is mostly a function of the compatibility layers - and after watching a presentation from the last OOoCon it may also run a little deeper than that. (I would give the OOo team high points for having this discussion in public, and I think it shows that the problem is being taken seriously, as it should be.)
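If anyone does want to post numbers, a reproducible test file is the place to start. Here is a rough sketch of a log-style CSV generator - the row count and column layout are arbitrary choices of mine, not the file the article used:

import csv
import datetime
import random

ROWS = 100_000  # arbitrary; raise this to stress the import
start = datetime.datetime(2005, 1, 1)

with open("test_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "host", "status", "bytes"])
    for i in range(ROWS):
        writer.writerow([
            (start + datetime.timedelta(seconds=i)).isoformat(),
            "host%02d" % random.randint(1, 20),
            random.choice([200, 304, 404, 500]),
            random.randint(100, 50_000),
        ])

Open the same file in Calc and in Excel, time both, and post the results - then we'd have something concrete to argue over.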

For my purposes I have been hitting the Base module hard, and one thing I have noticed is that when I take data from a Base table and link it into a Calc sheet, the transfer appears to be pretty slow. I haven't collected precise metrics, so this is just perception at this point. But if I am not careful about limiting the number of rows, the transfer can take minutes, not seconds. I would imagine this has more to do with memory management than XML parsing, as there is no XML to parse as far as I know.
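A cheap way to see why capping the row count matters before linking into Calc - using sqlite3 purely as a stand-in for Base's embedded database, with made-up table and column names:

import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")
conn.executemany(
    "INSERT INTO log (msg) VALUES (?)",
    (("event %d" % i,) for i in range(200_000)),
)

# Fetch a capped result set, then the whole table, and compare.
for limit in (1_000, 200_000):
    t0 = time.perf_counter()
    rows = conn.execute("SELECT * FROM log LIMIT ?", (limit,)).fetchall()
    print("%7d rows fetched in %.3fs" % (len(rows), time.perf_counter() - t0))

This only measures the fetch, of course - if the real bottleneck is on the Calc side, as I suspect, the gap there would be even wider.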

Pretty dodgy stuff, but it's Microsoft we're dealing with here.


I would not call it dodgy. At my last corporate position our customers would send us files of many tens of thousands of records - we offered an analysis service as part of the process of cutting them over from their old backend systems to ours. The same was done annually for as long as they used our system, so that we could generate statistics (from all installed systems) to feed back into the system for forecasting purposes. It is therefore not uncommon, in my experience, to work with files of the size he was talking about. It may not be an everyday affair, but it is certainly not unheard of, and when it comes up there is no easy way around it: either the tools you have can handle it reasonably well or they choke.

Andrew Jensen
