Glen -
Yeah, I agree behemoths can drive consensual reality. I just don't think they can be innovative at the same time. The innovation comes from outside, much smaller actors. And when the innovation does come from inside a behemoth, I posit that some forensic analysis will show that it actually came from either a (headstrong/tortured) individual inside the behemoth, or from the behemoth's predation.
I suppose the definition of "innovation" is part of the point. I agree that *radical* innovation is hard by definition when tied to the momentum vector of a behemoth... even a (headstrong/tortured) individual gets damped by this. See Steve Jobs.
So, it's not clear to me we can _design_ an artificial system where calibration (tight or loose) happens against a parallax ground for truth (including peer review or mailing lists).

It seems intuitively obvious to me that such a system *can* be designed, and that most of it is about *specifying* the domain... but maybe we are talking about different things?

I don't know what you're saying. 8^) Are you disagreeing with me? Are you saying that it seems obvious to you we _can_ design an artificial system which calibrates against a consensual truth?
I think I am saying we can design one that *tries to* calibrate against a consensual truth. It is not clear to me that we can design one that succeeds. The proof is in the pudding. Of course, the definition and scope (geotemporal as well as sociopolitical) of *consensual* comes into play... which may provide a logical bound to what can be done (can a sufi, a taoist, a fundamentalist LDS member, and a Doug Roberts share a consensual truth bigger than the color of the sky on a clear day? If that?).

(sidenote, Doug and Ingrun came to the house for whiskey, burned flesh and root vegetables just the other night, and he says he misses us but might stay on separate vacations anyway).

Superficially, I would agree that we can build one... after all, we already have one. But I don't think we can design one. I think such a design would either be useless _or_ self-contradictory.
I think that there may be logical bounds to this (see above) based on the nature of human nature and what "consensual reality" might mean. We can define it as "whatever a group converges on" which is OK... but probably not exactly what we are striving for?
It still seems we need an objective ground in order to measure belief error.

I think this is true by definition. In my work in this area, we instead sought measures of belief and plausibility at the atomic level, then composed those up into aggregations. Certainly, V&V is going to require an "objective ground", but it is only "relatively objective"... if that even vaguely makes sense to you?
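(For concreteness: composing atomic belief/plausibility measures up to aggregations is the move Dempster-Shafer evidence theory makes. A minimal sketch below; the two-observer frame and all names are my own illustration, not anything from Steve's actual work.)

```python
# Hypothetical Dempster-Shafer-style sketch: atomic mass assignments
# combined into an aggregate, then queried for belief and plausibility.

def combine(m1, m2):
    """Dempster's rule of combination over frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q  # mass assigned to contradictory evidence
    # Renormalize away the conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

def belief(m, hypothesis):
    """Total mass committed to subsets of the hypothesis (lower bound)."""
    return sum(v for k, v in m.items() if k <= hypothesis)

def plausibility(m, hypothesis):
    """Total mass not contradicting the hypothesis (upper bound)."""
    return sum(v for k, v in m.items() if k & hypothesis)

# Frame of discernment: a claim is "true" or "false".
T, F = frozenset({"true"}), frozenset({"false"})
TF = T | F  # ignorance: mass assigned to "either"
m_a = {T: 0.6, TF: 0.4}          # observer A: some evidence for "true"
m_b = {T: 0.3, F: 0.2, TF: 0.5}  # observer B: mixed evidence
m = combine(m_a, m_b)
print(belief(m, T), plausibility(m, T))  # belief <= plausibility
```

The gap between belief and plausibility is exactly where the "relatively objective" part lives: the aggregate never collapses to a single point-probability unless every observer fully commits.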

Well, I take "relative objectivity" to mean (simply) locally true ... like, say, the temperature inside my fridge has one value and that outside my fridge has another value. But local truth usually has a reductive global truth behind it (except QM and gravity). So, I don't think "relative objectivity" really makes much sense.
"locally relevant" is even better than "locally true" I think, and I stretch "locality" beyond the geotemporal... there is a sociopolitical/psychological/spiritual/religious/??? domain in which the idea of locality is also required for this definition.
Scope and locality do make sense, though. You define a measure, which includes a domain and a co-domain. Part of consensual truth is settling on a small set of measures, despite the fact that there are other measures that would produce completely different output given the same input. So, by "objective ground", I mean _the_ truth... the theory of everything. And, to date, the only access I think we have to _the_ truth is through natural selection. I.e. If it's right, it'll survive... but just because it survived doesn't mean it was right. ;-)
I'm not holding my breath waiting for a "theory of everything". I'm pretty sure the likes of Gödel's incompleteness theorems already blew that concept right off the table... and that may be only the smallest of reasons. Right/wrong are only relative to a given set of axioms, which we can (in principle) come to consensual agreement on (the axioms, and perhaps how well a given situation aligns with them).

So far, I'm pretty happy with "do unto others" as an axiom of human intention and action, and not much more. That leaves a lot of room for interpretation and may describe Genghis Khan's behavior (as moral) as easily as Gandhi's. The proverbial Ten Commandments tend to over- and under-specify, and by the time you get to the entire codex of the Abrahamic religions, it is definitely over/under-specified. The I Ching doesn't represent axioms for human morality so much as a scaffolding for perception.

I gave up on the search for a GUT about the time I graduated from undergrad Physics 35 years ago... I mean I gave up believing it would be achieved, not that the search and the infinitude of approximations and new formulations would be useful and entertaining. I continue to be entertained and continue to trust there is utility (at least to drive a powerful capitalistic entertainment/military/industrial society like our own).

Carry on!
 - Steve


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
