[udk-dev] Thinking about an API deprecation process
Hi *,

on the last ESC meeting, we had a little brainstorming about if and how
to deprecate OOo API. The 'if' was unanimously agreed upon; for the
'how' we came up with the following thoughts:

API deprecation
===============

See http://wiki.services.openoffice.org/w/images/2/2a/Esc-mar-2009-api-deprecation.odp
for the kickoff slides.

-- What we need to do --

Decide on preconditions for change:
 - API was badly designed (architects/pleads to vote if not concordant).
   Have a list of 'design smells' here, e.g.:
   * missing exception specifications
 - API is unused
   * implemented but unused (can only be easily verified inside the OOo
     code repo, with some more effort inside the extension repo - is
     that enough?)
   * not implemented (maybe transitively, i.e. listener interfaces,
     which are meant for API clients, but don't have code to call them
     inside OOo)
 - API implementation is too expensive (referring to both effort and
   performance) (architects/pleads to vote if not concordant).
   What we mean here is e.g. (hypothetical):
   * profiling xml import has shown that css::xml::sax::XEntityResolver
     is horribly inefficient and needs a third argument
   * after the drawing layer rework, one of the css::drawing interfaces
     needs an inordinate amount of code to emulate old semantics

Decide on constraints:
 - how many clients does this API have?
   * inside the OOo code
   * (estimated) use outside the OOo repo
   * (estimated) number of implementers not reading interface-announce
   For the latter two: if (at most) a recompile is enough, any number of
   implementers won't block a change. If syntactic changes are required,
   have an architects/pleads majority in favor of the change?
 - how 'bad' is the API really - if bad enough, change anyway?

Process of change:
 - when would change be permitted - every feature release, or only major
   releases?
 - deprecate API in advance - one or two feature releases before the
   actual removal. Of course, a replacement needs to be available then?
 - can/should we add technical barriers/support for detecting stale API
   usage, i.e. refuse to run such extensions? Should we add technical
   means to warn devs when using deprecated API (either enabled in
   debug builds, or in a special logging mode of OOo)?

Who decides?
 - we've referred to the entity finally deciding as architects/pleads
   here; please consider that a placeholder. We'd like to hear sensible
   proposals for that committee; simply voting on the relevant project
   mailing list is also conceivable, or just having the respective
   project lead decide.

Looking forward to your ideas,
Kay, Frank, Thorsten

- To unsubscribe, e-mail: dev-unsubscr...@udk.openoffice.org
  For additional commands, e-mail: dev-h...@udk.openoffice.org
Re: [udk-dev] unable to find uno runtime environment for windows
Emmanuel is not subscribed, Cc-ing.

Joachim Lingner wrote:
> I have no idea what you mean with 'uno runtime environment exe for
> windows', DCOM component etc. There is, however, no particular type
> library available. OOo can only be accessed via IDispatch.
>
> Joachim
>
> emmanuel.vanc...@free.fr wrote:
>> Hi, I don't know why, but I am not able to find the uno runtime
>> environment exe for windows. I installed the uno sdk, but it doesn't
>> seem that it installed the DCOM component, which I really need. Is
>> there a way to install the DCOM component via the sdk? If not, where
>> can I find the URE exe for windows?
>> Thanks in advance.
>> Emmanuel Vancour

Cheers,

-- Thorsten

- To unsubscribe, e-mail: dev-unsubscr...@udk.openoffice.org
  For additional commands, e-mail: dev-h...@udk.openoffice.org
Re: [udk-dev] Re: My first Calc addin: problem with exception handling
On Mon, Aug 25, 2008 at 02:49:56PM +0100, Andrea wrote:
> I've tried to force a SIGSEGV, but even then, gdb stops but there is
> no call stack available. No idea...

Try another (newer) gdb version, if not done already. gdb had a
tendency to break on C++...

Cheers,

-- Thorsten

- To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
Re: [udk-dev] evolving API
Mathias Bauer <[EMAIL PROTECTED]> writes:
>> But that's the only safe way to do this, for them. BTW, telling from
>> past experience here: even if API stays compatible syntactically,
>> there's absolutely no guarantee that the semantics haven't changed,
>> and the extension stops working anyway.
> Come on, then why create APIs at all? For the same reasons you could
> say that no program is usable because it may have bugs. And indeed
> what you describe is just this: a bug.

Hi Mathias,

no, it's not necessarily a bug. Compatibility circles around API
contracts, which is (much) more than just method syntax. You can change
function behaviour, even rightly so, e.g. to fix an outright bug, or
when implementation differs from documented behaviour, and break an
extension that relies on the old behaviour. And no, I don't think this
is bound to happen much less frequently than bugs caused by syntactical
incompatibilities.

>> I don't see much value in this UNO API compatibility discussion,
>> unless we deal with this end-to-end.
> I agree that we shouldn't rush things. I'm not convinced that we have
> thought hard enough to get the flexibility we (the developers) want
> without letting our users suffer from the consequences. If we are sure
> that we must nail extensions to major releases to get the necessary
> flexibility, so be it. But we should try hard to avoid that.

I'm talking about _every_ release. To reiterate: we cannot rely on
_any_ extension (except the most trivial ones, or those that don't use
API at all) running correctly with a new release, without somehow
assuring that API contracts haven't changed. We can achieve this by
either QAing all known extensions against the new release, or by very
strict API testing, or by effectively forbidding any changes to code
paths reachable from API calls. Mozilla does the former, and that
approach appears the most feasible to me.

> What we are doing now has worked for COM developers for a long time.
> It might not be the most comfortable thing for the developers, but
> IMHO we should try to achieve comfort first for the users, developers
> second. About COM: you can still take StarOffice 3.0 from 1995 and
> embed a Writer 3.0 object into an Excel 2003 container. Of course
> nobody will do that for anything other than testing, and combining
> apps with more than 12 years difference in age is an exaggeration. But
> I hope you get the message.

Did you try? IIRC, there _have_ been subtle differences that we'd have
to cater for, over the years. And I can counter with various other COM
examples, e.g. in the DirectX area, where interfaces stayed, for sure,
but ~nothing works unless one uses the very latest IDirect3DFoo21
interface. So, I consider your example just that - an example that in a
defined area, a stable contract might be maintainable for an extended
period of time. But nobody would supposedly be able (nor willing) to
assert that for all of our 3000+ interfaces...

> So it *is* possible to keep compatibility without making an API
> unusable.

Agreed. Perhaps not always, but that has to be proven. Unless you're
not willing to take the implementation with you forever, this is easy
to falsify. And that's the whole point Frank and others have been
making: we already have 7+ million LOC, and it's already a heavy burden
to maintain that. Compatibility is not a holy grail for me - it's a
tradeoff. And honestly, I'd currently rather spend development
resources on other stuff than working around API idiosyncrasies, or
changing implementations to also handle XWindow5...

> That's not exactly the Mozilla approach. Extensions usually are
> declared as compatible for some releases, mostly until the next major
> (AFAIK the project explicitly urges them not to declare their
> extension compatible for the next major). This is wild guessing also.
> There's no QA involved, as you can't QA what has not been released.
> Only a few extensions are marked compatible only for the current
> release.

OK, didn't know that. Regarding QA, though: that misses the point. You
or the project can QA your extension against the release, and then flag
it as compatible - regardless of whether you're the maintainer or not.

Cheers,

-- Thorsten

- To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
Re: [udk-dev] evolving API
Mathias Bauer <[EMAIL PROTECTED]> writes:
> What I especially wouldn't like is the following situation: I have
> written an extension that runs fine in, say, OOo 2.x. I also didn't
> use any types that have been changed incompatibly in OOo 3.0, but as
> this release is announced to be API-incompatible, my extension is not
> usable reliably. How can we avoid that? How can an extension find out
> whether it is still compatible with the current OOo version?

Hi Mathias,

there's a relatively recent thread about this topic in [EMAIL PROTECTED].

> I hate how it is done in TBird and FFox, where extensions are disabled
> just because the version number changed, but still run fine after some
> manual fiddling in the version information of the extension (something
> end users shouldn't need to do).

But that's the only safe way to do this, for them. BTW, telling from
past experience here: even if API stays compatible syntactically,
there's absolutely no guarantee that the semantics haven't changed, and
the extension stops working anyway. I don't see much value in this UNO
API compatibility discussion, unless we deal with this end-to-end. An
acceptable level of future-proofness can IMO only be achieved by either
QAing against all extensions (and disabling what hasn't been QAed -
that's the Mozilla approach), or running extensions in a sandbox
(out-of-process might be good enough here).

Cheers,

-- Thorsten

- To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
Re: [udk-dev] cpp uno: Enhancement for uno::Reference
Stephan Bergmann <[EMAIL PROTECTED]> writes:
> I saw your patch yesterday and only had a quick glance at it. What I
> do not understand is whether those few ambiguities/incomplete types
> across OOo are related to the Reference change (i.e., whether the
> Reference change introduces an incompatibility that has to be taken
> care of at those various places across OOo - that would be a clear no
> no).

Hi Stephan,

no, this ain't an incompatibility in the sense of the word, although
without the changes, compilation would break. Basically, removing
operator const Reference<XInterface>& unearths various places where the
source type (that of the referenced interface being implicitly cast) is
incomplete, or where the conversion to XInterface is ambiguous (for
people who use multiple inheritance). The former borders on a bug; the
latter is inconvenient - although I suspect that the underlying reason
for people passing around generic XInterfaces (and later querying for
the actually needed one again) is at least partially related to
uno::Reference having had _only that one_ single automatic conversion
in the past.

Cheers,

-- Thorsten

- To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
Re: [udk-dev] cpp uno: Enhancement for uno::Reference
Stephan Bergmann <[EMAIL PROTECTED]> writes:
> Third, the implicit option you proposed where you can implicitly use
> xDerived for the up-cast. That the third most closely mimics plain
> pointers does not automatically qualify it as the best solution. Too
> much implicit conversion can make code too obscure and fragile, as we
> probably all have learned the hard way.

This statement is as true in general as it is inapplicable to this
case. The only thing the proposed change will facilitate is the
implicit upcast a naked ptr-to-interface has offered for ages. All of
us should be used to that behaviour, have employed it countless times,
and found it convenient (I for myself did, at least). This is in no way
comparable to e.g. the automatic conversion of ints to strings, where
semantics really do get obscured.

Cheers,

-- Thorsten

If you're not failing some of the time, you're not trying hard enough.

- To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
Re: [udk-dev] cpp uno: Enhancement for uno::Reference
Stephan writes:
> I vaguely remember having discussed this before with Daniel Bölzle,
> but neither of us can remember whether there were any serious problems
> with it.

Well, I will of course try this on a full build ;-)

> Whether or not the constructor should be explicit might be a question
> of style, however.

I don't think so. To match the behaviour of plain ptrs (for the
implicit conversion), the converting constructor needs to be
non-explicit. After all, having that take place automatically is my
whole point here.

Frank writes:
> As much as I appreciate convenience while writing code, I never felt
> too bothered with using the .get() workaround.

Granted. My point was that the workaround is non-obvious, to the best
of my knowledge nowhere documented, and it breaks the abstraction of
uno::Reference being a ptr-like object.

> On the other hand, I somewhat fear the zillion lines of output
> probably emitted by the compiler when it comes to an assignment
> xFoo = xBar where XBar is *not* a base class of XFoo, but the compiler
> nonetheless tries to match the several hundred potential assignment
> instantiations. Which makes me reluctant about this change ...

Well, that's only 'your' Windows compiler stupidity - and it does that
anyway, with that stdexcept spillage... ;-) No, seriously: in this case
there's no trying-out of template instantiations. The types of xFoo and
xBar completely and sufficiently determine the member template
instantiation - the place that fails is the ptr assignment inside that
template.

So, if there are no hard objections, I'd implement the change, do a
full build on my favorite platform, and report back with a complete
patch here.

Cheers,

-- Thorsten

If you're not failing some of the time, you're not trying hard enough.

- To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
[udk-dev] Merits of spirit yacc (was: Thoughts on String Construction / Destruction Performance)
Kay Ramme - Sun Germany - Hamburg <[EMAIL PROTECTED]> writes:
> At least in theory an unlimited (in the sense of programming language
> constructs etc.) parser generator, as yacc, should be better than a
> limited one, as boost::spirit.

In which way do you think spirit is limited? The semantic actions
available there can implement just about every conceivable
context-sensitive grammar.

> Reaching the point where the parser becomes the main bottleneck, we
> should try out independent implementations, using boost::spirit and
> bison/yacc. Personally, I am a fan of domain-oriented programming
> languages and therefore would favor bison.

According to other people, spirit _is_ a DSEL (domain-specific embedded
language) - when given the choice, I prefer embedded DSLs over external
ones. :-)

Cheers,

-- Thorsten

If you're not failing some of the time, you're not trying hard enough.

- To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
Re: [udk-dev] Merits of spirit yacc
Kay Ramme - Sun Germany - Hamburg <[EMAIL PROTECTED]> writes:
> Thorsten Behrens wrote:
>> According to other people, spirit _is_ a DSEL (domain-specific
>> embedded language) - when given the choice, I prefer embedded DSLs
>> over external ones. :-)
> Ohhh, from what I understood from others, I thought you'd have to
> construct a parser by C++ statements (using a kind of library).

Yes, your initial thought is correct - it's an _embedded_ DSL. Look at
slideshow/source/engine/smilfunctionparser.cxx around line 454 to get
an impression of how the grammar looks (BTW: the messy semantic actions
there could be expressed _much_ more elegantly with a 1.33-era spirit).

-- Thorsten

If you're not failing some of the time, you're not trying hard enough.

- To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
Re: [udk-dev] Thoughts on String Construction / Destruction Performance
Eike Rathke <[EMAIL PROTECTED]> writes:
> A specialized parser could almost certainly be faster than the general
> SAX parser passing strings back and forth. I wouldn't do it with
> lex/yacc though; they're a nightmare to maintain, and in case wrong
> code was generated, which can happen, you're almost lost. I'd prefer
> boost::spirit instead, but it might be even more work to implement the
> parser. I've no idea though whether boost::spirit would be suitable to
> parse an ODF tree.

Hm, I'd profile a larger test case beforehand - spirit is a
recursive-descent parser vs. yacc being table-driven. But OTOH, maybe
contemporary optimizers are able to compensate for that. And I'd
definitely bump our boost to 1.33 then - spirit (and lots of other
stuff) has been improved a lot.

-- Thorsten

If you're not failing some of the time, you're not trying hard enough.

- To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]