Re: [vos-d] Van Jacobson: named data

2007-05-09 Thread Len Bullard
You put your finger on the major issue: cost.  The energy budget is part
of the noise factor of any communications network, artificial or organized.
The web is predated by better designs with regard to noise, and it is a bit
of a botch with respect to quality in terms of how it has been marketed.  No
surprise there, because quite a bit of the technological infrastructure in
use today is marketed that way, with the marketing goals dominating the
feedback to the design goals.  The orthogonal pressure of investing schemes
such as hedge funds, private equity, venture capital, etc. drives the quality
down further, because the squeeze for numbers, if not met by innovation, comes
out of the employees and other aspects of corporate management.

Predictions that network models such as the web would lead to this were
made prior to the web tsunami, not because of the web in particular (although
it is a particularly bad case) but because the laws of second-order
cybernetics and complexity indicate this will be the case.

I use examples such as the questions on the blog simply to point out that
even for what some would consider cultural memory with a very high
penetration of exposure for some spatio-temporal event, the distortion
effect of a high intensity noisy signal over a much shorter time at a higher
bandwidth is sufficient to degrade the reliability of any copy.

The cost of purchasing a vetted copy of the series and watching the first
episode (sufficient to answer all of the questions correctly, and therefore
to pass a single test of the reliability of the source) is quite minimal.  The
cost to correct the damage across the culture is not.  So while the value of
that particular corpus may not be significant, it is easily demonstrated that
the modulation of frequency and amplitude of a signal of high impact is
sufficient to distort a high-value decision.   Again, the first three pages
of Shannon's seminal work provide the basic model of selectors (decision
trees).  To apply the model socially, behavioral science is a sufficient
model.  To figure these into a hypermedia system design that can adapt to
distortion, or to create distortion, a second-order cybernetic model is a
good start, plus some study of signal filtering models.

The web isn't actually designed to be dirty.  It is designed not to care
whether it is dirty or clean.  It is a minimalist contract, much the way a
virus is a minimalist interface for propagation without regard to host
degradation.

The web doesn't care.  You have to.  That's the deal.

len

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Lars O. Grobe
Sent: Wednesday, May 09, 2007 2:39 AM
To: VOS Discussion
Subject: Re: [vos-d] Van Jacobson: named data

 In effect, regardless of the wrapper, unless you have the original 1959
 first episode of Rocky and Bullwinkle, you probably can't answer those
 trivia questions correctly.

There are some approaches to organizing these decentralized verification 
processes in the field of certificates, e.g. cacert.org, where you need 
a certain amount of credit to sign, and need a certain number of signers 
and verified documents. Maybe one could think about something like that for 
digital content. If I get the episode from 1959 in digital form with the 
signature of a public library, I might trust it. If not, there might 
be a second copy around, signed by another library, and if both are 
identical I might trust them. Or one copy signed by two libraries. The 
question is whether those libraries will spend the money on people verifying 
their digitized content, as this cannot be automated. And most of 
them already suffer from the effort necessary to digitize content 
without proofreading... Maybe the cheap, quick and dirty character of 
the web is part of its very nature, with proofread and verified content 
existing only on some small, expensive islands... ;-) Lars.
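Lars's two-library idea can be sketched concretely. The sketch below is hypothetical (the `trust` function, the attestation format, and the library names are all made up for illustration): it treats an attestation as a library's signed-off SHA-256 digest, and trusts a copy only when a quorum of independent attestations match the digest of the copy actually held.

```python
import hashlib

def digest(content: bytes) -> str:
    """The content fingerprint a library would attest to."""
    return hashlib.sha256(content).hexdigest()

def trust(content: bytes, attestations: dict, quorum: int = 2) -> bool:
    """Trust content only if at least `quorum` independent libraries
    attest to the digest of the copy we actually hold."""
    d = digest(content)
    return sum(1 for att in attestations.values() if att == d) >= quorum

# Two libraries verified the same copy; a third attests to a different one.
episode = b"rocky-and-bullwinkle-1959-e01 ..."
attestations = {
    "library-a": digest(episode),
    "library-b": digest(episode),
    "library-c": digest(b"tampered copy"),
}
print(trust(episode, attestations))  # two matching attestations suffice
```

In practice the attestations would be real digital signatures over the digest, but the quorum logic is the same: agreement between independent verifiers substitutes for trusting any single source.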


___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d






Re: [vos-d] Van Jacobson: named data -- revision control

2007-05-09 Thread Ken Taylor
Lalo Martins wrote:
 On Wed, 09 May 2007 09:07:57 +0200, Karsten Otto wrote:
  I don't quite understand what you need versioning for. The bulk of
  changes you get in a shared world is avatar movement, which may add up
  to ~30 changes per second per avatar. Do you really want to keep a
  record of all this? My understanding was that if you want to make a
  movie, it's your responsibility to do the recording (and filling your
  hard disk), not that of the world/server.

 IMO, much of the interesting content may not actually be 3d objects,
 but rather, information (text, models, images, sound, the works) which
 you use the 3d objects to navigate through and interact with.

 Anyway, in my mental model, the answer to that is horizons.  For a
 document (akin to a wikipage), you may define an infinite horizon: all
 revisions will be kept.  For most 3d content, the default horizon, which
 is to be defined -- a month, a week, 200 revisions?  And for things like
 position of avatars, zero: no revision control at all.

 Alternatively, only record a revision when the object remains unchanged
 for some period of time... but this is hairy to implement and I'm not
 sure it would be worth it.


You could approximate this by taking revision snapshots at specified
intervals in time, as long as the object has changed at least once in that
interval (or at least n times, maybe). That way you may not get every
revision, but you could at least have one every hour or every day. You'd
want this for complex objects that change often, but for which not every
change is necessarily critical to remember.
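A minimal sketch of that snapshot policy, assuming a simple change counter and caller-supplied timestamps (the class and its names are illustrative, not part of VOS):

```python
class SnapshotPolicy:
    """Take a revision snapshot at most once per interval, and only
    if the object changed at least `min_changes` times since the
    last snapshot."""
    def __init__(self, interval: float, min_changes: int = 1):
        self.interval = interval
        self.min_changes = min_changes
        self.last_snapshot = 0.0
        self.changes = 0

    def on_change(self):
        self.changes += 1

    def maybe_snapshot(self, now: float) -> bool:
        """Return True when a snapshot should be recorded now."""
        due = now - self.last_snapshot >= self.interval
        if due and self.changes >= self.min_changes:
            self.last_snapshot = now
            self.changes = 0
            return True
        return False
```

With `interval=3600` you get at most one revision per hour, and quiet hours (no changes) record nothing at all, which matches the "changed at least once in that interval" condition above.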

Another use for that might be recording animations -- store object movement
10 times a second or something. Though animation has a slightly different
meaning than revision history -- you can have several different
animations stored for the same group of objects, and only care about the
relative ordering and timing within each animation, whereas revision history
generally has only one thread with absolute ordering and timing.

So animation could be a generalization of revision history -- an animation is
a series of object states, ordered and timestamped (with a relative or
absolute time) -- whereas a revision history is one particular series of
object states with absolute timestamps. An object can have several
animations associated with it, but only *one* revision history. The revision
history can even store version changes for other animations in the object,
but you wouldn't want it to store version changes for itself :) Addressing a
previous version could be something like
vip://hostsite/my/revisioned/vobject/version/5 -- where version is a
reserved contextual name that specifies an object's revision history (so
maybe call it core:version? or history:version?) Animations would be
similar: vip://hostsite/my/animated/vobject/happyfunanimation/6 -- but have
any name you want. Then of course you could have
vip://hostsite/my/animated/vobject/version/2/happyfunanimation/8 if the
animated object is also revisioned :)

(Actually, getting to the object might be more like
vip://hostsite/my/revisioned/vobject/version/5/object -- this way you can
store metadata at the revision node, such as
vip://hostsite/my/revisioned/vobject/version/5/timestamp. And you can store
data at the animation node, too, such as
vip://hostsite/my/animated/vobject/happyfunanimation/loopmode)

Like I said in another email, when you store a revision of an object, you
change its links to point to the correct revisions of all its child objects,
too, thereby maintaining the integrity of the tree. As long as its child
objects are versioned, that is.
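One hypothetical way to model that link-pinning, with a toy `Vobject` class (the names and structure are assumptions, not the actual VOS types): each snapshot records, per link, the child's name and its latest revision number, so a stored revision always refers to specific revisions of already-versioned children.

```python
class Vobject:
    """Toy versioned vobject: each snapshot pins the current revision
    number of every (already versioned) child it links to."""
    def __init__(self, name: str):
        self.name = name
        self.links = {}     # link name -> child Vobject
        self.history = []   # revision n -> {link: (child name, child rev)}

    def latest(self) -> int:
        return len(self.history) - 1   # -1 means "never snapshotted"

    def snapshot(self) -> int:
        """Freeze the current link set, pinning each child's revision."""
        frozen = {link: (child.name, child.latest())
                  for link, child in self.links.items()}
        self.history.append(frozen)
        return self.latest()

# A texture shared by reference; the model's revision 0 pins texture rev 0.
texture = Vobject("woodgrain")
texture.snapshot()                 # texture revision 0
model = Vobject("table")
model.links["surface"] = texture
rev = model.snapshot()
print(model.history[rev])          # {'surface': ('woodgrain', 0)}
```

If the texture is later edited and re-snapshotted, the model's old revision still points at texture revision 0, which is exactly the tree integrity described above.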

Just one possible implementation, of course!

-Ken




Re: [vos-d] delineation and revision control

2007-05-09 Thread Peter Amstutz
Let's see if I can clear up a few things...  Bear in mind that the 
design is still evolving in my head :-)

a) From the perspective of VOS, which is concerned with synchronization, 
the most important purpose of versioning is so you have a basis for 
deciding whether your replica A' of vobject A is current or out of date.  
Being able to record changes and go back into the history is just a nice 
side effect.

b) Each vobject would be versioned separately.  The child list is part of 
the versioned state, but a change to a child vobject should not affect 
the version of the parent.  One exception: embedded children (property 
fields) do affect the version.
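A rough sketch of rule (b), assuming a plain integer version counter (the class is illustrative, not the VOS implementation): changes to the child list or to embedded property fields bump the parent's version, while edits inside a linked child bump only that child.

```python
class Vobject:
    """Per-vobject versioning sketch: the child *list* and embedded
    properties are versioned state; linked children version separately."""
    def __init__(self):
        self.version = 0
        self.children = []     # linked child vobjects (not embedded)
        self.properties = {}   # embedded children / property fields

    def add_child(self, child: "Vobject"):
        # Editing the child list is a change to this vobject's state.
        self.children.append(child)
        self.version += 1

    def set_property(self, key, value):
        # Embedded children (property fields) do affect the version.
        self.properties[key] = value
        self.version += 1

parent = Vobject()
child = Vobject()
parent.add_child(child)        # parent moves to version 1
child.set_property("x", 5)     # bumps the child only; parent stays at 1
```

The payoff for replication is that a replica can check staleness per vobject: a child changing does not invalidate the cached parent.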

c) The big picture I'm going towards here is that remote vobjects, 
caching, scalability/load balancing and the kind of broadcast routing Van 
Jacobson talks about are all basically cases of replication.  If the VOS 
design can unify these cases under one general solution then we've won.

I need to develop the idea more, but I think one key idea is relaxing the 
idea of a site so that it doesn't have to be tied to a specific host.  
I had already decided that sites would be identified by their public key, 
so that messages distributed by the site are self-verifying (by checking 
that the digital signature matches the site id).  This means that if you 
want to query a vobject, any replica -- the actual host site, a local 
cache, a third party -- can answer, and you can check that the answer did 
in fact originate from the site.  What's really interesting is that this 
means the authority to publish changes to vobjects rests in who has 
knowledge of the private key that corresponds to the site's public key.  
If you wanted to do load balancing/clustering, you could share that 
private key among several trusted hosts, any one of which would be able 
to issue official state changes for vobjects on that site.
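The self-verifying scheme can be illustrated end to end with a toy Lamport one-time signature built from SHA-256 (chosen only because it fits in the standard library; real site keys would presumably use a conventional signature scheme, and one-time keys would not be practical here). The point is the shape of the check: the site id is the hash of the public key, so any replica can hand back (public key, message, signature) and the client can verify the message without trusting the replica.

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # Lamport one-time key: 256 pairs of secrets; the public key is
    # their hashes (one pair per bit of the message digest).
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32))
          for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def msg_bits(msg: bytes):
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one secret from each pair, selected by the digest bits.
    return [pair[bit] for pair, bit in zip(sk, msg_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(s) == pair[bit]
               for pair, bit, s in zip(pk, msg_bits(msg), sig))

def site_id(pk) -> str:
    # Self-certifying identifier: the site id *is* the public key hash.
    flat = b"".join(h for pair in pk for h in pair)
    return hashlib.sha256(flat).hexdigest()

# A replica serves (pk, message, sig); the client holds only the site id.
sk, pk = keygen()
known_site = site_id(pk)
state_change = b"vobject /world/door: state=open"
sig = sign(sk, state_change)
assert site_id(pk) == known_site and verify(pk, state_change, sig)
```

Because verification needs only the public key, sharing the private key among trusted hosts (for clustering) or caching signed messages at third parties falls out for free, exactly as described above.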

I should point out that in the most common case, hosts will be in direct 
contact with the actual site, and receive messages published from the 
site directly, so it's not any different from the way things work now.

Getting back to Kao's question (which I've wandered quite far away from), 
as noted the synchronization principals in the system are the site and 
individual vobjects.  I don't see how cycles and boundaries between 
subgraphs factor into it, except perhaps in the initial phase of 
determining the specific subset of vobjects on the site you're interested 
in.

On Wed, May 09, 2007 at 11:45:50AM +0200, Karsten Otto wrote:
 Well, Lars' suggestion of versioning only interesting parts and  
 your suggestion of horizons seem reasonable, but I don't think we  
 have the basis for this in VOS. A vobject usually cannot live as an  
 isolated entity, but *requires* a number of relations to child  
 vobjects to make sense; thus any user-perceivable world object is  
 actually a subgraph of the overall world graph.
 
 The problem is delineation: It is not clear which subgraphs represent  
 independent world objects, or if there is even a distinctive  
 decomposition. For example, two objects may share a texture - which  
 vobject does it belong to? If you change one vobject, do you  
 include the texture in the version? Where do you stop following  
 relational links? I don't recall if there is any prohibition in VOS  
 against cycles in the graph - I think there isn't - so matters become  
 even more complicated.
 
 The only separation I currently see in VOS is the relation between  
 the site vobject and its children, but even here it is not clear  
 which children represent aspects of the site itself, which are  
 scenery, and which are avatars.
 
 Any suggestions?
 
 Regards,
 Karsten Otto (kao)
 
 

-- 
[   Peter Amstutz  ][ [EMAIL PROTECTED] ][ [EMAIL PROTECTED] ]
[Lead Programmer][Interreality Project][Virtual Reality for the Internet]
[ VOS: Next Generation Internet Communication][ http://interreality.org ]
[ http://interreality.org/~tetron ][ pgpkey:  pgpkeys.mit.edu  18C21DF7 ]





Re: [vos-d] Van Jacobson: named data -- revision control

2007-05-09 Thread Peter Amstutz
I don't think Jacobson was suggesting that a really new paradigm in 
networking would be able to handle the robust case of broadcast data, of 
which unicasting is simply a subset.  I find you need a little creativity 
to fill in some of the gaps in the later part of the talk, since he 
wasn't presenting a design (or really, even a complete theoretical 
architecture) but simply some big ideas that he thought were worth a lot 
more attention.

Packet-switched networks are fine for realtime communication, provided 
there is enough bandwidth and they aren't congested.  That performance 
degrades rather than simply shutting down and/or denying new connections 
is a feature, not a bug.

As you point out, no existing system adequately fulfills the requirements 
of online virtual worlds, which is precisely why we've spent so much time 
building our own.  The problem is a curious mix of large quantities of 
mostly static data (3D models, textures) punctuated by dynamic data with 
high-frequency changes and requirements for low latency.  Throw streaming 
media (voice) into the mix and it's really an architectural nightmare when 
you consider that the vast majority of network research has gone into 
trying to solve each of these problems in isolation and to the exclusion 
of the others.

The reason broadcast routing is so exciting is that we're not just trying 
to solve how to move the latency-sensitive high-frequency bits around, 
but that distributing and caching static resources (geometry, textures) 
is a huge part of the problem, and one that we never adequately addressed 
in the current/old s4 design.  Indeed, moving the latency-sensitive 
high-frequency bits around is easy, because there isn't that much of it, 
and the problem was largely solved by online games ten years ago.

On Wed, May 09, 2007 at 12:10:16PM -0700, dan miller wrote:
 Hi --
 
 I'm new to the list though I have been on IRC now & then.  
 
 I loved Jacobson's talk but one point struck me: the introduction of a new
 paradigm doesn't obviate the need for the old.  Packet-switching is great
 for fault-tolerance when the goal is to get this packet from here to there,
 no matter what.   It's actually the postal paradigm (through wind & sleet...)
 
 But the old telco real-time, hard-wired point-to-point connection was
 actually better suited to some things we do today over packet networks,
 particularly teleconferencing.  The control over latency and timing is lost
 when you switch to TCP (as you VOS folks know only too well).
 
 A data subscription model is really just a cool technological way to
 introduce the concept of publishing to the digital world in a useful way;
 but it doesn't change the fact that packet-switched networks are not so
 great for realtime communication.
 
 WRT VOS/Interreality goals, in particular avatar/object behavior (whether
 scripted or resulting from user input), we have a mix of requirements that
 doesn't easily fit any model I'm aware of.  It's time critical, like a phone
 conversation; it's point-to-many-point, like publishing; it's ephemeral,
 like broadcasting; but it's not fully global, in that typically you only
 care about a few objects in your virtual vicinity.  Distributing this data
 liberally is not an option due to bandwidth.
 
 The bittorrent model doesn't really wash here because of the requirement for
 low latency.  I think in this case we have another animal entirely, which is
 basically a secure multipoint channel cluster.  The closest analogy I'm
 aware of would be multi-party teleconferencing.  AT&T actually does this
 pretty well.
 
 This animal should be optimized for its intended use, and not shoe-horned
 into paradigms that it doesn't really fit.  It might be reasonable to take a
 look at some of the ITU work in this area, such as H.323, and even the IETF
 VOIP/SIP stuff that's out there.  
 
 I'm not saying we should necessarily adopt any such standards; but it is
 often worthwhile to take a good look at how similar problems have been
 tackled, for better or worse.  Otherwise you risk spending mucho time
 reinventing various types of wheels.
 
 -dbm

-- 
[   Peter Amstutz  ][ [EMAIL PROTECTED] ][ [EMAIL PROTECTED] ]
[Lead Programmer][Interreality Project][Virtual Reality for the Internet]
[ VOS: Next Generation Internet Communication][ http://interreality.org ]
[ http://interreality.org/~tetron ][ pgpkey:  pgpkeys.mit.edu  18C21DF7 ]


