I really support this conversation. For my part, historically, JavaSpaces (or 
Tuple Spaces) is the real Gelernter and Carriero vision, and Jini (River) is 
secondary, serving that purpose.  
http://www.edge.org/3rd_culture/gelernter/gelernter_index.html
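
For anyone who hasn't lived with it, the tuple-space model Gelernter and
Carriero describe boils down to writing entries into a shared space and pulling
them back out by associative match. A minimal JavaSpaces sketch (the Message
entry and the way you obtain the space reference are placeholders, not River
code):

    import net.jini.core.entry.Entry;
    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;

    // A trivial entry: public object fields and a public no-arg constructor.
    class Message implements Entry {
        public String channel;
        public String body;
        public Message() {}
        public Message(String channel, String body) {
            this.channel = channel;
            this.body = body;
        }
    }

    class SpaceSketch {
        // Given a JavaSpace reference obtained however you like (lookup, config, ...):
        static void demo(JavaSpace space) throws Exception {
            // Producer side: drop a tuple into the space.
            space.write(new Message("river-dev", "hello"), null, Lease.FOREVER);

            // Consumer side: null fields are wildcards, so this matches any
            // "river-dev" message currently in the space (or returns null).
            Message template = new Message("river-dev", null);
            Message received = (Message) space.take(template, null, JavaSpace.NO_WAIT);
        }
    }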

MG


On Feb 10, 2011, at 9:46 AM, Dan Creswell wrote:

> Uhm, one more follow-up and I'll shut up....
> 
> Another way to characterise the choice we're talking about....
> 
> If you want Jini to have mass appeal given the current technical enterprise
> environment you have two basic choices:
> 
> (1) Change Jini sufficiently to fit with the environmental norms: XML
> configuration, Spring Integration, expensive hardware-based clustering
> solutions, Maven, static configuration (dynamic lookup doesn't fit the
> mindset) etc.
> 
> (2) Educate those in the environment sufficiently that they'll adopt Jini. A
> large education task and no mistake. In the meantime, as they learn, expect
> plenty of "I want clustered this" or "Dynamic lookup is too complicated" or
> "I don't want to do movable code" (oh the irony, given that JavaScript in
> browsers is movable code).
> 
> Option (2) requires sticking to your guns, taking off some rough edges but
> staying true to "collections of unreliable components to build reliable
> systems" and accepting there may never be mass adoption.
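> 
> For the record, the "dynamic lookup" people find too complicated is nothing
> more exotic than asking a lookup service at runtime for whatever currently
> implements an interface. A rough sketch (the group name and MyService
> interface are made up, error handling omitted):
> 
>     import net.jini.core.lookup.ServiceItem;
>     import net.jini.core.lookup.ServiceTemplate;
>     import net.jini.discovery.LookupDiscovery;
>     import net.jini.lease.LeaseRenewalManager;
>     import net.jini.lookup.ServiceDiscoveryManager;
> 
>     class LookupSketch {
>         static MyService find() throws Exception {
>             // Discover lookup services in a group via multicast; no static addresses.
>             LookupDiscovery discovery =
>                     new LookupDiscovery(new String[] { "example-group" });
>             ServiceDiscoveryManager sdm =
>                     new ServiceDiscoveryManager(discovery, new LeaseRenewalManager());
> 
>             // Ask for anything implementing MyService, waiting up to 10 seconds.
>             ServiceTemplate template =
>                     new ServiceTemplate(null, new Class[] { MyService.class }, null);
>             ServiceItem item = sdm.lookup(template, null, 10000L);
>             return (MyService) item.service;  // item is null if nothing showed up in time (unhandled here)
>         }
>     }
> 
> No XML, no hard-coded host and port; if the service dies and comes back
> somewhere else, the same call still finds it.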
> 
> Note this isn't about technical superiority, it's about mindset and
> education. Google GFS isn't particularly clever or technically brilliant; it,
> like Amazon Dynamo and a bunch of other stuff, is just a case of selecting
> the right tradeoffs for the task (GFS has for a long time had a single
> master but a decent replication/recovery strategy). Thus I contend that it's
> not about whether or not developers are capable; it's that they think
> differently. There's limited consideration of tradeoffs and more of a
> follow-the-cookie-cutter 3-tier, reliable-hardware, framework, database
> approach to delivering some set of functions, where performance, other
> non-functionals and operational aspects (e.g. monitoring/stats-gathering)
> are largely an afterthought.
> 
> 
> 
> On 10 February 2011 17:10, Dan Creswell <[email protected]> wrote:
> 
>> "Many times better hardware is a better choice."
>> 
>> That depends on how you weight the pros and cons of course....
>> 
>> Many enterprises go with "better hardware as a better choice" and
>> ultimately still suffer horrendous complexity, uncontrollable/difficult to
>> manage failures, high operational costs, cheaper developers (ironically they
>> produce less stuff and it's more likely to be anything other than what the
>> customer desires) etc.
>> 
>> Some enterprises go with cheap hardware and "guesses" as you call them, or
>> indeed CAP-theorem tradeoffs as I prefer to call them. They tend to have
>> fewer and better developers and potential (not always realised) for
>> "operational sanity" (well understood, simple processes for failure
>> handling, no special hardware arrangements etc).
>> 
>> "Better choice" then, for me at least, is more like "more suited to the
>> average enterprise environment where there isn't particularly good
>> understanding of the consequences of that choice and little genuine care for
>> decent product". At the risk of offending people it's the equivalent of
>> putting Windows on a PC and tolerating the endless reboots etc. I prefer OS
>> X or some other variety of UNIX.
>> 
>> An example:
>> 
>> Everyone loves Oracle: they build clusters of high-quality hardware, have
>> asynchronous replication to a DR site and write their software as if the
>> database is always present. They often place all their code on top of a
>> single database. The day comes when that database fails (dies, exhibits a
>> bug, the network fails, performance goes through the floor; yes, these things
>> all happen, and more often than people would like to believe) and the entire
>> business stops. Oh, they recover eventually, probably lose some transactions
>> along the way (because they didn't understand what asynchronous replication
>> does), and suffer a myriad of deployment and recommissioning issues, but they
>> stagger on, complaining as they go and burning huge quantities of cash just
>> to keep themselves alive.
>> 
>> This is a whole big topic; I could go on for hours, I'm sure others can too,
>> and I'm sure we'll have many differing opinions, but the overall challenge is
>> the same:
>> 
>> (1) What kind of systems do we want to build with Jini?
>> (2) What kind of users do we want using Jini?
>> (3) What kinds of stuff do we need to build, given (1) and (2)?
>> 
>> I have a particular (obvious?) bias, and I believe Jini might satisfy that
>> desire. Equally, I could imagine lots of people would like to make Jini the
>> same old, same old that Oracle, SpringSource and such provide. The further
>> we are from the former and the closer we are to the latter, the less interest
>> I'll have, because it's all been done before.
>> 
>> 
>> On 10 February 2011 16:49, Gregg Wonderly <[email protected]> wrote:
>> 
>>> In a recent project, I built a distributed Postgres database using
>>> transactions and a rather interesting InvocationHandler implementation that
>>> allows a mesh network to exist between all participants so that everyone
>>> sees every change. From a participant's perspective, there are zero or more
>>> client displays that show and dynamically update data, there are database
>>> hosts, and there are external servers that use the API calls to change
>>> in-memory versions of the persistent data.
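>>> 
>>> The guts of that mesh are just a dynamic proxy that fans each API call out
>>> to every participant. Very roughly (the names are invented, and the real
>>> thing also has to deal with transactions and partial failure):
>>> 
>>>     import java.lang.reflect.InvocationHandler;
>>>     import java.lang.reflect.Method;
>>>     import java.lang.reflect.Proxy;
>>>     import java.util.List;
>>> 
>>>     class FanOutHandler implements InvocationHandler {
>>>         private final List<Object> participants;  // stubs for every node in the mesh
>>> 
>>>         FanOutHandler(List<Object> participants) { this.participants = participants; }
>>> 
>>>         public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
>>>             Object result = null;
>>>             // Apply the same change on every participant so everyone sees it.
>>>             for (Object participant : participants) {
>>>                 result = method.invoke(participant, args);
>>>             }
>>>             return result;
>>>         }
>>>     }
>>> 
>>>     // Usage: hide the participants behind a single proxy implementing the data API.
>>>     // DataApi api = (DataApi) Proxy.newProxyInstance(
>>>     //         DataApi.class.getClassLoader(), new Class[] { DataApi.class },
>>>     //         new FanOutHandler(participants));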
>>> 
>>> Because all participants are on a local network, the transaction rate is
>>> quite good. But there are some issues with how transaction manager failures
>>> play out (including some of the bugs Patricia has found, whose causes we had
>>> not managed to track down) that make it a bit fragile for continued use.
>>> 
>>> I'd personally love to see TransactionManager become the focus of some
>>> effort to finish making its behavior dependable and consistent as a
>>> single-process service.
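>>> 
>>> For reference, the client-side contract we'd be hardening is small; the
>>> usual dance looks roughly like this (the lease time is arbitrary and the
>>> work done under the transaction is elided):
>>> 
>>>     import net.jini.core.transaction.Transaction;
>>>     import net.jini.core.transaction.TransactionFactory;
>>>     import net.jini.core.transaction.server.TransactionManager;
>>> 
>>>     class TxnSketch {
>>>         // txnMgr is a TransactionManager proxy found via lookup (or however you get it).
>>>         static void doAtomically(TransactionManager txnMgr) throws Exception {
>>>             // Create a transaction with a 10-second lease.
>>>             Transaction.Created created = TransactionFactory.create(txnMgr, 10000L);
>>>             Transaction txn = created.transaction;
>>>             try {
>>>                 // ... space.write()/take() and other participant operations under txn ...
>>>                 txn.commit();
>>>             } catch (Exception e) {
>>>                 txn.abort();
>>>                 throw e;
>>>             }
>>>         }
>>>     }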
>>> 
>>> When you go to have a distributed view of the same data across multiple
>>> systems, you get all the problems of partial failure being an impediment to
>>> successful operations on each transaction.   Dan Creswell and I have had
>>> many a discussion about how it is often easier to build a "better piece of
>>> hardware" than to "distribute a software system".   Many times better
>>> hardware is a better choice.
>>> 
>>> When you look at Hadoop and Google's use of "cheap hardware", you can see
>>> how the line can be drawn in the sand at some point to just provide limited
>>> functionality and use "guesses" to move forward.
>>> 
>>> Gregg Wonderly
>>> 
>>> 
>>> On 2/9/2011 5:28 PM, Jeff Ramsdale wrote:
>>> 
>>>> +1. The lack of partitioning and fault-tolerance is exactly what's
>>>> keeping my current employer from using Outrigger, though they'd love
>>>> to. They do use Jini, though, so they'd be an easy sell if such a
>>>> thing were available.
>>>> 
>>>> -jeff
>>>> 
>>>> On Wed, Feb 9, 2011 at 3:13 PM, Patricia Shanahan <[email protected]> wrote:
>>>> 
>>>>> What do others think of this general idea, as a development direction
>>>>> for River?
>>>> 
>>> 
>> 

Michael McGrady
Chief Architect
Topia Technology, Inc.
Cell 1.253.720.3365
Work 1.253.572.9712 extension 2037
[email protected]


