At 10:47 PM 9/5/2007 -0700, Philippe Bossut wrote:
Hi,

Phillip J. Eby wrote:
To a certain extent, these approaches *appear* riskier because there is a definite up-front cost, and the human brain is biased towards options that lead to a sure short-term gain and a possible later loss, over options with a sure up-front cost, for a possible later gain -- even if the two options are mathematically indistinguishable in terms of net gain or loss!
Well, it's worth knowing, then, which mathematical function you're trying to optimize: is it the long-term life of the project? Is it how useful it is to users?

That is indeed the question I'm asking. What *do* we want to optimize? I'm not married to any particular choice; this is an OSAF decision that needs to be made. And it's primarily a "values" decision, not a "what is rationally best?" decision.
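The "mathematically indistinguishable" framing above can be made concrete with a quick sketch. (The numbers here are invented purely for illustration; they are not from this thread.)

```python
# Toy example (invented numbers): two options with identical expected
# net value but opposite timing of costs and gains. Loss aversion makes
# Option A *feel* safer even though the math says they're equivalent.

def expected_net(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# Option A: sure gain of +10 now, 50% chance of losing 20 later.
option_a = [(1.0, +10), (0.5, -20)]

# Option B: sure cost of -10 now, 50% chance of gaining 20 later.
option_b = [(1.0, -10), (0.5, +20)]

print(expected_net(option_a))  # 0.0
print(expected_net(option_b))  # 0.0 -- identical net expectation
```

The brain's bias shows up in preferring A's certain gain despite the equal expectation.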


Note that in all scenarios, if function (a) crosses the death threshold (no more funding whatsoever) along the way, it doesn't really matter that (a) and (b) are asymptotically indistinguishable... So even if the long term is important, trying not to die on the way is important too (this reminds me of an interesting read: http://paulgraham.com/die.html).

Yes, which is why arriving at some decision of the route to take (whichever one it might be) is of some considerable urgency.


So it tends to come down to the question of what kind of delays we'd prefer: relatively predictable up-front delays, or release-time delays of unpredictable duration?
I like predictable; the question is: how long? And how much effort?

We won't have any way of knowing that without deciding enough of the "what" to create an actual plan. However, I think it's clear that the situation is rather symmetrical: regardless of the choice we will have a relatively predictable immediate effect, and a long-term unknown. With the current direction, we will definitely be able to release new small features a lot sooner -- but the probability of successfully addressing bigger issues like system-wide performance becomes much lower.

With a different direction, we will definitely *not* be releasing new small features sooner, simply because we will temporarily not have a complete system. So, even if we are able to develop features faster, we will have an upfront delay before any of those features can actually be delivered to the users. The probability of resolving the killer issues, however, is much greater, meaning a greater probability of the application living past its funding, since it's far more likely a developer community can spring up to add "scratch their own itch" features, vs expecting them to do the internal overhauling needed to support some of the bigger issues like email. (Again, this is barring new funding directions, of course.)


No one will argue that a test-driven culture is a bad thing or that adequate test coverage is preposterous. The element of decision you're keeping out of your reasoning is time and amount of effort.

That's because it's not actually the decision-making factor, nor should it be. We have a fixed schedule and fixed resources. The question then is, what do we want to have at the end of them?

That's the real question here. Once it's decided, the rest is merely a question of arranging things so that the highest-value parts are delivered first, so that when we come to the end, we have delivered the most possible value.

This type of project is actually my specialty; I'm far more comfortable with projects with an absolutely-fixed budget and deadline, as they tend to have correspondingly clearer priorities, so you know exactly what you can afford to chuck at the last minute if you have to. (In that model, for example, there's no such thing as a "feature complete" milestone where you can't actually ship the software!)

So to me, the absolutely critical question is: what are the priorities? If the highest priority is to have a system that will be adopted by the community and outlive OSAF's current funding sources, then testability moves right to the top along with it. If the highest priority is to swell the feature count, then the priorities are different, and our actions should likewise be different.


This is, however, the yardstick that'll eventually get the project funded or not.

Funded by whom? Do we have prospects? A business model? Revenue projections? Even if we had these things, it would merely help answer the question of what the priorities should be, i.e., what will produce the most value if we deliver it *first*.

(Also, it should be noted that if someone entered into a funding relationship for business reasons, having a codebase that isn't so tightly coupled to the existing team would of course be a plus. So an off-the-shelf tools approach would likely be more attractive to such an investor, vs. our very "proprietary" current architecture.)


I can only guess how much the work you're proposing (i.e. rewriting the whole architecture to ensure full testability) will cost.

I'm not proposing anything, actually, except that OSAF establish a clear hierarchy of priorities for the remaining time and funds. For me, this is important because my work on libraries and design generally runs anywhere from a few months to a year ahead of the direct needs of the actual project, so that the team is (ideally) never blocked waiting for me to finish something.

If I am a kind of advance man who walks ahead of the troops to scout out the territory, then the troops have been caught up with me for a few months now, and I'm no longer useful as a scout. I can't do recon if we don't know where we're going yet, and I get nervous when the front line catches up to me before we know where we need me to go next. :)

So, my request for direction should not be confused with *proposing* a direction. I have made exploratory scouting trips to the West and the North, shall we say, so I can report a little on possibilities in these directions. For example, as Katie mentioned, Grant and I have done some exploration of an event-management model that would allow us to do the interaction/presentation and domain/persistence splits I've described in some of these emails. We've also looked at recent stats on Python ORM performance, and so on. But we need a clearer direction from OSAF before doing further recon-in-depth or "special ops" in these areas, or alternatively redirecting towards other ways to attempt to address the issues Katie has mentioned.

It's clear we must do *something* about them, and I think we have done some good exploration of how to address them under scenarios 2 and 3. But it's not entirely clear that some or all of them are solvable under scenario 1 in the time we have remaining. Nonetheless, if that's the direction OSAF goes, then of course I'm sure everyone will give it their best shot.

However, we had not really discussed the community adoption/sustainability question as much as the technical issues, and Katie's bringing it up in the email today made it clear to me that the hot button issue there will be testability. Without it, it seems unlikely that we will be able to build a developer community to support the product, or deliver the desired feature updates with sufficient predictability to support a sustainable revenue model for the desktop.

Now, that guess of mine is based on complete ignorance of current work on the revenue model, so if I am ill-informed, I would love to discover I am wrong.


One thing, though, gets me worried: the way you frame things, it seems that you assume that embarking on #2 or #3 will basically stall the current project (Chandler Desktop 0.7.x), consuming all resources and leaving existing and would-be users in the cold,

I would assume that there would be a gradual transition of resources, just as has happened with previous architecture shifts (e.g. the schema API, the startups system, stamping-as-annotations, and EIM, to name the main ones I've been involved in to date). Some people are more keen to play on the edge as the details are worked out, and other people prefer to know when the ground rules are established and what the recommended best practices are, so forcing everyone to work on porting at the same time would be a bad idea anyway.

I don't see any reason why we wouldn't still have 0.7.x bugfix releases, or any reason we would abandon current users. In fact, in the long run, testability and predictability should mean *more* resources available -- and in the "off-the-shelf tools" case we have higher odds of even getting community help in a relatively short period.

However, it does mean that new *features* would likely be delayed, relative to when they could be coded on the 0.7 line.

All this having been said, I will continue to take pains to state that I'm not *advocating* this path, nor claiming that its success is certain. A year ago, I'd have advocated it unequivocally, and six months ago I was still quite enthusiastic, if becoming a bit concerned about the risks. Today, it may already be too late to make a difference to the ultimate outcome, so why stake my reputation on it? I'm only personally interested in the refactoring approach if the organization as a whole is "on board" and accepting of the risks.

It's really now just a question of how OSAF wants to define itself and its project -- what do we really want our work to mean? All of the actual paths at this point seem equally risky, so the choice is in some sense a "personal" or "identity-defining" one, rather than something that gets made on the basis of logical reasons... just like all the most important choices in life usually are. :)

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Open Source Applications Foundation "chandler-dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/chandler-dev
