Re: [vos-d] [www-vrml] Re: [x3d-public] Wanna help the Mass Avatar Mash?

2007-05-11 Thread Len Bullard
Sounds right.   A test mark is understood legally and otherwise.

OTOH, it is best for everyone that X3D apps conform but X3D adapts.  Few
things are 100% right out of the box including the spec.  Since members
determine what that is, why sue each other for what it ain't?  

A horse is a horse unless of course...

H-anim conformance is important, everyone knows that. Collada is good for
moving loosely contracted assets, but H-anim is the crown jewel for obvious
reasons.

len


From: Alan Hudson [mailto:[EMAIL PROTECTED] 
Sent: Friday, May 11, 2007 9:46 PM
To: Len Bullard
 
We have a conformance/certification mark that you are given permission 
to use.

Looks like this:

http://www.xj3d.org/status.html


This is the only thing protected.  We don't stop you from labeling your 
software as X3D.




___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] Van Jacobson: named data

2007-05-09 Thread Len Bullard
You put your finger on the major issue:  cost.  The energy budget is a part
of the noise factor of any communications network, artificial or organized.
The web is predated by better designs with regard to noise and is a bit of
a botch with respect to quality in terms of how it has been marketed.  No
surprise there because quite a bit of the technological infrastructure in
use today is marketed that way with the marketing goals dominating the
feedback to the design goals.  The orthogonal pressure of investing schemes
such as hedge funds, private equity, venture capital, etc. drives the quality
down further because the squeeze for numbers, if not met by innovation, comes
out of the employees and other aspects of corporate management. 

Predictions that network models such as the web would lead to this were
made prior to the web tsunami, not because of the web in particular (although
it is a particularly bad case) but because the laws of second-order
cybernetics and complexity indicate this will be the case.

I use examples such as the questions on the blog simply to point out that
even for what some would consider cultural memory with a very high
penetration of exposure for some spatio-temporal event, the distortion
effect of a high-intensity noisy signal over a much shorter time at a higher
bandwidth is sufficient to degrade the reliability of any copy.

The cost of purchasing a vetted copy of the series, watching the first
episode (sufficient to answer all of the questions correctly and therefore to
pass a single test of the reliability of the source) is quite minimal.  The
cost to correct the damage across the culture is not.  So while the value of
that particular corpus may not be significant, it is easily demonstrated that
the modulation of frequency and amplitude of a signal of high impact is
sufficient to distort a high-value decision.  Again, the first three pages
of Shannon's seminal work provide the basic model of selectors (decision
trees).  To apply the model socially, behavioral science is a sufficient
model.  To figure these into a hypermedia system design that can adapt to
distortion or to create distortion, a second order cybernetic model is a
good start plus some study into signal filtering models.
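To put numbers to the Shannon reference, here is a small illustrative sketch (not part of the original mail) of the entropy measure from those first pages; the symbol probabilities are made-up examples:

```python
from math import log2

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A fair coin carries one full bit per selection; a biased source carries
# less, which is the budget that channel noise then eats into.
print(entropy([0.5, 0.5]))   # 1.0
print(entropy([0.9, 0.1]))   # ~0.469
```

The less information each selection carries, the less distortion it takes to flip a high-value decision.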

The web isn't actually designed to be dirty.  It is designed not to care
whether it is dirty or clean.  It is a minimalist contract, much the way a
virus is a minimalist interface for propagation without regard to host
degradation.

The web doesn't care.  You have to.  That's the deal.

len

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Lars O. Grobe
Sent: Wednesday, May 09, 2007 2:39 AM
To: VOS Discussion
Subject: Re: [vos-d] Van Jacobson: named data

 In effect, regardless of the wrapper, unless you have the original 1959
 first episode of Rocky and Bullwinkle, you probably can't answer those
 trivia questions correctly.

There are some approaches to organize these decentralized verification 
processes in the field of certificates. E.g. cacert.org, where you need 
a certain credit to sign and need a certain number of signers and 
documents verified. Maybe one could think about something like that for 
digital content. If I get the 1959 episode in digital form with the 
signature of a public library, I might trust it. If not, there might 
be a second copy around, signed by another library, and if both are 
identical I might trust them. Or one copy signed by two libraries. The 
question is whether those libraries will spend the money on people verifying 
their digitized content, as this cannot be automated. And most of 
them already suffer from the effort necessary to digitize content 
without proof-reading... Maybe the cheap, quick and dirty character of 
the web is part of its very nature, with proofed and verified content 
existing only on some small expensive islands... ;-) Lars.
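Lars's two-libraries test reduces to comparing digests of independently obtained copies. A minimal sketch (illustrative only; the payloads and the choice of SHA-256 are assumptions, not from the mail):

```python
import hashlib

def digest(data: bytes) -> str:
    """Fingerprint a copy with SHA-256."""
    return hashlib.sha256(data).hexdigest()

def trusted(copy_a: bytes, copy_b: bytes) -> bool:
    """Trust content only when two independently sourced copies agree."""
    return digest(copy_a) == digest(copy_b)

# Hypothetical payloads standing in for two libraries' digitizations.
episode_lib1 = b"rocky & bullwinkle, episode 1, 1959"
episode_lib2 = b"rocky & bullwinkle, episode 1, 1959"
print(trusted(episode_lib1, episode_lib2))      # True
print(trusted(episode_lib1, b"tampered copy"))  # False
```

This only shows the copies agree with each other; whether either library digitized the original faithfully is exactly the human vetting cost Lars points at.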




Re: [vos-d] Van Jacobson: named data

2007-05-08 Thread Len Bullard
Understood completely, and I know how SSL, checksums, asymmetric keys, etc.
work.  But without the understanding that content corrupts as it drifts away
from its original sources, the buyer doesn't understand that the technical
solution is not the whole solution.

In effect, regardless of the wrapper, unless you have the original 1959
first episode of Rocky and Bullwinkle, you probably can't answer those
trivia questions correctly.  If you don't have the authentication and
authorization, you don't have access to the original source.  If you don't
have the digital signature and checksum technology, I can't trust your
answers without the original sources.  This is the real problem of named
data sharing.  Otherwise, URIs with registries make the name sharing easy,
and the rest is authentication, authorization, signatures, etc.   I don't
think the problem of discoverability is as big as the speaker believes it
is.

It isn't just trust.  It's verification.  For that, you must have an
authentic copy of the original source, or access to one, which amounts to the
same thing; but if it is access, you have to prove it.  Names alone won't
make that happen.
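That verify-before-trust step can be sketched in a few lines (illustrative only; an HMAC under a shared key stands in here for a true public-key digital signature, and the key and payloads are hypothetical):

```python
import hmac
import hashlib

KEY = b"demo-shared-secret"  # stand-in for real key material

def sign(content: bytes) -> str:
    """Tag content so a holder of the key can later verify it."""
    return hmac.new(KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Accept the content only if its tag checks out."""
    return hmac.compare_digest(sign(content), tag)

original = b"Episode 1: the rocket fuel formula"
tag = sign(original)
print(verify(original, tag))               # True
print(verify(b"drifted retelling", tag))   # False
```

The tag proves the bytes are the ones the signer saw; it says nothing about whether the signer's copy was itself authentic, which is Len's point.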

len


From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Peter Amstutz

Well, on a technical level you have digital signatures that give you a 
technical way to verify that information from a given source has not 
been tampered with.  Provided you trust that the public key used to sign 
that data did in fact come from that entity, of course, but trust has to 
start somewhere.

On a social level, you're right, people tend to introduce errors (either 
accidentally or deliberately) in information.  There isn't a technical 
solution to that.  But that's not the kind of transmission we're dealing 
with; we're only concerned with exact digital copies.  Whether the 
source itself is an eyewitness account, a newspaper article or a 
wikipedia writeup, the goal is simply propagation of the actual digital 
document without allowing for the introduction of errors into the 
document itself.






Re: [vos-d] Van Jacobson: named data

2007-05-07 Thread Len Bullard
Versioning yes, but also vetting and revetting of sources.  The further you
get from original sources in any communication system, the more noise you
incur without adequate checks.  Shannon 101.  Names alone won't do it.

I put a trivia test at my personal blog just as a "Do you trust Google and
Wikipedia" test.  The problem is one of not starting from an authenticated
or original source.  If you start from wikipedia to answer those questions
without the original source, you will get about half of them wrong or near
wrong.

Modern Internet traffic worries about efficiency, but typically the data is
short-lived.  If you live where I live you get to watch a fascinating
change: NASA is hiring as many sixty- and seventy-plus-year-old engineers as
they can find if they have actual J2 series engine experience.  The original
sources and digital systems failed to keep enough documents alive.  They
have the designs but, like the Canadians who tried to rebuild the V2 engines
for their contest submission, they don't know how to run them, and it turns
out the devil is really in the details.

len

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Peter Amstutz


Summary: 

First 40 or so minutes explaining why networks up until now evolved the 
way they did.  Circuit-oriented telephone networks evolved the way they 
did due to specific ways the underlying circuit-switching technology 
worked (going back to human operators working a switchboard!).  
Packet-switched networks were revolutionary because, unlike the phone 
system, they were agnostic to the underlying transport medium.  TCP/IP 
was designed for point-to-point communication based on the assumption 
that the primary use of data networks would still be for point-to-point 
conversations.  Also, TCP/IP was designed in an environment where each 
computer had many users, by contrast with today, where you have many 
computers per user.

The second part of the talk describes where we are today, and how 
networks can be adapted to make it better.  Modern Internet usage has 
evolved such that the vast majority of traffic is better described as 
broadcast traffic rather than point-to-point: publishing web pages, 
streaming video, file sharing, even email in the case of mailing lists.  
This is very inefficient if many users are requesting the same data at 
once.  Another problem posed by current architectures is the 
challenges of data synchronization between devices, which can also be 
traced to the fact that devices are often required to synchronize on a 
peer-to-peer basis, rather than having a mechanism to broadcast changes 
to other devices.

The proposed solution is a bit light on details, but big on ideas: to 
deal with problems of scale in the age of Internet publishing, we step 
away from our notions of purely fixed-address, point-to-point 
communication, and consider that in many cases, it is highly desirable 
to be able to automatically replicate and propagate that data.  In the 
example given, when you access the New York Times (newspaper) front 
page, you shouldn't care whether the actual data you get is served from 
the NYT web server, or from some other downstream server that has a copy 
-- provided you can verify that it originated from the NYT by checking 
the digital signature.  One significant idea mentioned was that, in the 
way that TCP/IP abstracts the underlying physical transport layer, such 
a system ought to be abstracted from the protocol layer -- so that data 
can be propagated by whatever physical or virtual means are most 
appropriate or available.

He points to Gnutella and Bittorrent as examples of trends in this 
direction.  Each system demonstrates the two key properties of this type 
of approach, that once something is published and replicated a few times 
it may stay in the network even if the original source is no longer 
available, and that popular resources are inherently load balanced by 
virtue of the fact that the more people access a resource the more 
intermediate servers will have a copy.  Unfortunately he didn't seem to 
mention Freenet (http://freenetproject.org), which to my knowledge is 
the most complete implementation of many of the ideas he's promoting.
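The property those systems share -- that a replica can be verified no matter which host serves it -- falls out of content addressing: name the data by a hash of its own bytes. A minimal sketch (illustrative; the in-memory dict stands in for a network of untrusted peers):

```python
import hashlib

store = {}  # stand-in for many untrusted peers holding replicas

def publish(data: bytes) -> str:
    """Name the data by its own hash; any node may then serve it."""
    name = hashlib.sha256(data).hexdigest()
    store[name] = data
    return name

def fetch(name: str) -> bytes:
    """Retrieve from any replica and check it actually matches its name."""
    data = store[name]
    if hashlib.sha256(data).hexdigest() != name:
        raise ValueError("replica does not match its name")
    return data

name = publish(b"front page, morning edition")
print(fetch(name) == b"front page, morning edition")  # True
```

Because the name commits to the content, popularity automatically load-balances: the more peers that cache a named item, the more places it can be fetched from without any extra trust in them.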

Commentary:

This talk is primarily aimed at spurring people to do more research in 
this area.  For this reason, it poses many questions but provides few 
concrete answers as to how such a system would be put together in 
practice.  He helpfully separates it out into the easy stuff (problems 
for which reasonable solutions already exist) and the hard stuff 
(everything else).

He doesn't really touch on the highly dynamic nature of current web 
sites.  When every user is served a custom web site, complete with 
widgets and ads personalized to their zip code, it's much more difficult 
to replicate in a useful way.  Of course media (sound, images, video, 
maybe 3D meshes later on) are usually not (yet) dynamically generated, 
and 

Re: [vos-d] Metaverse Roadmap

2007-04-20 Thread Len Bullard
They manage thoughts and ideas toward control attractors.

It is one part Electric Sheep (a content builder for SecondLife) plus the
usual New York VR cabal.  SL needs an independent front organization for
its effort to create a standards patina around their technology.  Actually,
this sort of thing can become very serious very fast because it is fueled by
external sources feeding and paying for press.  This was done with the W3C
in the early days and used to pirate the status of the legitimate standards
organizations.  This works for the company sponsors just as it did for those
who footed the bills for STimBL and crew at MIT.

I call it The Standards Game.  Everyone knows how to play it now.  The
thing to pay attention to is the participation agreements that determine by
membership contract what the conditions for contributing intellectual
property are if and when they actually do any real work beyond pontificating
and holding seminars.  That is where the rubber meets the road.  OTW, yet
another kaffeeklatch and that is fine.  Every street corner has a Starbucks.

On the other hand, take a look at that participation list.  That is a lot of
luminaries including Castronova and Dyson.  These people get on board, raise
money and drain resources to their own pet projects without building too
much.  The Venture Capitalists love these guys because they are Judas Goats
for other investors pulling a lot of money toward their interests.   Note
the presence of Joi Ito (content must be free; I must keep my Porsche).

This is a serious bunch though if you look, not a lot of them are building
worlds.  They are getting mindshare for the few on that list that do
(Koster, ES, Metaverse, etc.).  Castronova was given a MacArthur Genius
Grant to do a project done in VRML ten years ago and already working in JOI.

len


From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Peter Amstutz
 
I've seen it.  Honestly I think it's mostly self-serving promotional 
fluff by people with vested interests in hyping their technology.  I 
suppose it's useful from a PR standpoint of promoting immersive 3D, but 
it doesn't really offer anything concrete that anyone who might be 
interested in building on metaverse technology could plan around.






Re: [vos-d] Flux Worlds Server Announcement

2007-03-31 Thread Len Bullard
First of all I should mention that I don't speak for VOS/Interreality 3D --

which you seem to be assuming I do. I'm just an enthusiast following their
progress and hoping to contribute a bit. 

I've lurked on the list for a few years now.  There are lots of projects but
this one has staying power based on the core of people building it.  That I
support.  I am a VRML content builder among other things, but I support
real-time 3D in general.

I know what *I* want the metaverse
to be, and I'm especially annoyed at the Lindens for attempting to
appropriate the term and therefore am a bit sensitive to new press releases
making hype about the metaverse. And that's basically it.

That will only frustrate.  The press, the Lindens, their investors, all will
work hard to create a patina of invention and legitimacy up to and including
rewriting history.  That is how the web was won.  Can you imagine how
irritating it was for the SGML hypertext community to read that Tim
Berners-Lee had *invented* hypertext and THE hypertext markup language?  The
history wasn't that well known so it worked at scale.  Hype works.
Investors expect it.  The way to fight that is to correct the press, but
don't fight over terms like *metaverse*.  It is already a hype term that has
very little meaning.  

What is a 'metaverse'?

We have the same problems with 'virtual reality'.  It is just a genre of
real-time 3D.   VOS has yet to find a genre that is easily summarized.  That
might be good because it continues to fly under the radar.  About the worst
thing that can happen is to have the press locusts descend on it before it
is ready.  I can't count the number of web projects gone South that I've
seen because the fringes decided it needed a big press boost or more cred
than it had earned.  Of such is a bubble made.

The VRMLers are careful to acknowledge VRML's roots in practical commercial
products, eg, SGI Open Inventor.  As a result, when a blogger or press
release talks about how VRML was created on the web, but is not a practical
product, it is easy to point to the evolution from the SGI product line and
correct that.  One thing the press really hates is to have their credibility
ripped from them with factual reporting.  

I do think -- if they do it right -- Flux Worlds will be a useful product
and an important step in open-standard virtual-worlds. But I maintain that
it's an evolutionary step, not a revolutionary one -- 

There are revolutions of technology and revolutions of scale and market.
HTML was not a revolution.  It was a design that was decades old.  The
markup design was essentially the work of Truly Donovan, not Tim or Dan.
The US Army had a DTD-less stylesheet driven markup hypertext browser years
before XML.  HTTP is even less of a revolution.  In combination, they caused
a scaling effect that was a market revolution.   A generation of
not-very-adept programmers picked it up and did cool things with it, but the
generation that took it to the next level was already very adept and mostly
40-somethings.  The press didn't find that very good reading.  Fifteen years
later, none of it matters, but don't underrate the power of the press to
fuel a revolution in market where there was no revolution in technology.

 and it's no reason not to aim farther ahead, or to abandon all alternate
 paths. 

I agree and those paths are also no reason to slag the sincere and working
efforts of the VRMLers to get the next piece of their puzzle in place
because of the term 'metaverse'.  The press made the term popular, not the
technologists.  You don't own it.  The Lindens don't.  Parisi doesn't.
Everyone will use it as they see fit.  It may even die fast because it is a
hype term with no solid core meaning.

Also, as far as I've seen, VOS isn't making lots of publicity or
preannouncements -- 
Peter, Reed et al have been quietly working away for a few years trying to
get a good base technology working from the ground up. And they *do* have
running code. 

I know.  I keep track.  I am waiting to see what this emerges as because so
far, it is *geekSpeakBound* and while that is good for the programmers, it
won't mean a thing to the content developers or the market.  I'm waiting for
that synergy when hot content and new technology merge.   I warn you though,
technology is largely invisible.  If VOS creates yetAnotherSocialSpace, it
is an also-ran.  Customers are never wowed by how neat your classes are.

So I'm really not sure where a lot of your comments are coming from.

25+ years of experience.   Don't get hung up on the terms or claims to
primacy as if this project were THE Metaverse.  That will just earn these
guys enemies where they don't earn them themselves and critics where it
isn't in need of critique.  So far, VOS is a small personal project with a
mail list and some running code, but nothing yet to show that will impress
the market.  

I am impressed by the staying power of the core contributors.  

Re: [vos-d] Flux Worlds Server Announcement

2007-03-30 Thread Len Bullard
We learned the really hard way in the early years of VRML to take it slow
when trying to create something intended to scale out the Internet.  I won't
quarrel with your requirements but I also am very leary of anything that
reeks of Snowcrash-like thinking or visions.  We got burned because that
sort of thinking leads to expectations that are not just impossible in the
short term, they aren't achievable.  VRML still gets hammered in books,
blogs and other articles for overreaching and not hitting those marks
despite the fact that as a standard, ten years later, the content still
works with current tools and a lot of current content still works with old
tools.  That is what a standard has to achieve.  Don't overdrive your
headlights.  It's a deadly mistake.

So I'm not criticizing, but take it to the bank that Parisi knows better
than anyone what the cost of overreaching is.  Your 'metaverse' is worth
having, but he is a good way down the path of making his version work now,
and he has been at the forefront of getting such standards built for over a
decade now.
He isn't 'now' realizing; he paid the price in blood before most of the rest
of the current crowd was even trying.

len

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Ken Taylor
Sent: Friday, March 30, 2007 9:10 PM
To: VOS Discussion
Subject: Re: [vos-d] Flux Worlds Server Announcement

Remember, the Metaverse needs open protocols. Without them... everything
else is Just a World.

I agree with this, and I'm glad that more and more people are realizing it.
However, though it's necessary, it's not sufficient to create the
metaverse. Some other requirements I would use to evaluate any metaverse
system:

- Not only can anyone run a server, but they can easily interlink and users
can freely traverse the spaces between different servers, with the potential
for seamlessly connecting virtual spaces (eg, with portals).
- The protocol is future-proof and can keep up with developments in
technology and new ideas for interaction while maintaining backwards
compatibility and a reasonable experience for those with hardware or
bandwidth that can't support the latest-and-greatest.
- The protocol should support user creation and ownership of content,
including a flexible scripting system, and the ability to transfer
user-created content between servers. It should support collaborative
editing and interaction with content.
- The protocol is highly extensible through 3rd-party plugins and not
locked-in to whatever the committee decided at standardization time was,
for example, the best parameterized-avatar system/voice-chat
system/streaming-video protocol/physics system/what-have-you. However, at
the same time, there should be a robust baseline spec that allows all
users to have a decent experience even with no plugins added.

I definitely see VOS as having the potential to meet all these requirements
and beyond. I can't really tell from the press release how far they are
planning to go with flux worlds. My guess is it's going to be
yet-another-shared-space-server, this time based on X3D and easily
integrated with web pages.

So it'll probably be really neat, but not the metaverse yet ;)

-Ken

- Original Message - 
From: Tony Parisi [EMAIL PROTECTED]
To: 'x3d-public list' [EMAIL PROTECTED]; 'www-vrml'
[EMAIL PROTECTED]
Sent: Friday, March 30, 2007 2:08 PM
Subject: [www-vrml] Flux Worlds Server Announcement


 Folks,

 We've been up to something over here - thought I would tell you about it
 before you heard it on the street.

 Media Machines has been developing a multi-user server based on a new
 protocol that we intend to put out into the open. We have dubbed it Simple
 Wide Area Multi-User Protocol, or SWMP (pronounced swamp). The intent is
 to work with Web3D and potentially other organizations to standardize SWMP.
 We will also supply a basic open source implementation. Our overriding
 goal-- one that we are pursuing with total passion and vigor-- is to create
 an open infrastructure for the Metaverse.

 We have wrapped SWMP into a server product called Flux Worlds. Flux Worlds
 is currently in alpha test. While the product is still several weeks away
 from beta test, we announced it yesterday with the goal of attracting early
 signups for the beta. We are also integrating a prototype of the new X3D
 networking nodes being developed by the Networking Working Group, right
 into Flux Player. The results look promising.

 Anyway, here is the announcement. We would love to have you be part of the
 beta when it's ready!

 http://www.mediamachines.com/press/pressrelease-03292007.php

 Remember, the Metaverse needs open protocols. Without them... everything
 else is Just a World.

 Yours virtually,
 Tony

 -
 Tony Parisi  415-902-8002
 President/CEO   [EMAIL PROTECTED]
 Media 

Re: [vos-d] Flux Worlds Server Announcement

2007-03-30 Thread Len Bullard
Because when you don't hit the mark, it causes people to throw the baby out
with the bathwater.  Visions are ok, but lots of publicity and
preannouncements without the goods takes on the rep of being snake-oil.   It
is a very bad strategy to get into a market with the aim of eliminating all
competitors.  It screws the customers over.  Netscape made that mistake and
they had their collective rears handed to them.

To me, metaverse is just another word.  It has no ownership and only as much
meaning as there is running code to support it.  It's like saying 'Heaven'
and then setting up the dogma without the pragma.  So no offense, but don't
get possessive with a term.  Offering up suggestions is cool.  It is cooler
to offer them with running code.

He isn't exactly playing it safe.  He has code, he spent the company money
to get it ready for open source, and he has the access to the standards
editors.  Is it more of the same?   We've had shared spaces but no standard
for hooking those up without buying Blaxxun Community server or any of the
various other products.  All we've had is client standards. What Tony is
talking about is a protocol that can be implemented anywhere, and possibly a
reduction in costs such that authors can host worlds for ten percent of what
it costs at LL.  That is significant.

Now don't get me wrong, I am not quarreling with the VOS vision or the work.
I'm saying if he manages to get this done, it puts something on the street a
lot of people need, so stepping back and saying "that's not a true
Metaverse" is sour grapes.  No one really knows what a metaverse is.  You
know what you want it to be, and that is cool, but it is not authoritative or
a reason to say he doesn't have one.  We got in trouble by promising a
science fantasy world that couldn't be built then, if ever, and it turned
people off to 3D on the web for ten years.

If he gets shared spaces out there for ten percent of the cost of SL and
does it with content standards that enable the content to move around among
worlds without being hostage to a POTS server farm market, that is an AMAZING
and very revolutionary accomplishment.  Give him cred, then get back to your
vision and show the next new thing when it is ready to show.   It's bad
karma to climb up the backs of other swimmers trying to stay afloat in the
same ocean.

len

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Ken Taylor
Sent: Friday, March 30, 2007 9:59 PM
To: VOS Discussion
Subject: Re: [vos-d] Flux Worlds Server Announcement

I think that a basic open-standard shared-space server based on X3D is an
obvious and safe step -- Flux Worlds will probably be very successful at
what it does. But in my view, it doesn't bring us any closer to the
metaverse and I hate when people throw that word around (ever since the
Lindens started doing so). To me it really is just more of the same -- we've
had shared space servers around almost as long as VRML itself, and we're not
that much closer to a true interconnected 3d universe on the internet.
Someone has to stop playing it safe for anything revolutionary to occur.

But what's wrong with snowcrash-like thinking or visions anyway? ;)

-Ken


Re: [vos-d] How to host a product design dinner party

2007-03-22 Thread Len Bullard
You know more about that niche than you think you do.  Figure out what VOS
trades with.  Transactions determine niche boundaries and more.

len

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Reed Hedges
Sent: Thursday, March 22, 2007 3:27 PM
To: VOS Discussion
Subject: Re: [vos-d] How to host a product design dinner party


We don't know what our niche is yet.  We have one main domain (3D)
and a secondary domain (Web) but there might even be others.
Actually when we first began this several years ago, we knew someone
who knew someone interested in building factory tracking systems, though
we ended up not really considering that at the time.  At one point I
wanted to do wireless self-organizing sensor networks-- I still think
that will be an emerging realm of innovation but I know that VOS is
not a good fit for its requirements.

I think we have some vague ideas on what we want our specific niches 
within 3D to be-- Peter and I may not even be able to explain it 
well yet.

So, we're just trying to implement what we can, so we can show it to
people and eventually find a niche.

Reed


On Thu, Mar 22, 2007 at 11:22:10AM -0500, Len Bullard wrote:
 The urge to focus on one single application is normal, but if you are
 building a toolkit such as VOS, it would be deadly.  You're doing the right
 thing, but it violates two of the web myths: easy and simple.  Simplistic
 analogies will sell it perhaps, but don't get trapped by your own press.
 VOS won't be a tool everyone can use.  The niche that can use it can do a
 lot with it.  I think that is Reed's point, yes?
 
 len



Re: [vos-d] XOD questions

2007-03-16 Thread Len Bullard
I agree 150% because I begged for that cleanup.  The unfortunate reality
was that the people designing the Schema weren't that experienced with Schema
design OR XML, and they did some ill-conceived things.  Schema was very new
when they started (else RELAX would have been a better choice but it didn't
exist then).  Because I wrote the first DTD for VRML, the geometry straw
man, I watched with horror as the Schema progressed, but because I was
working for Intergraph where VRML and 3D on the web had been dismissed out
of hand and *with prejudice*, there wasn't much I could do to be forceful.

I'm just saying the Schema per se isn't the problem.  You could ignore that.
It is the resulting warts in the instance that make it a PIA for loaders or
transforms.  I understand the disgust with CDATA.  Microparsing is a
solution for some problems but bad juju overall.
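The microparsing in question is easy to see with the -1 face delimiters Peter mentions below: the coordIndex of an IndexedFaceSet arrives as flat attribute text that the loader must tokenize itself. A minimal sketch in Python, with made-up sample data:

```python
# Sketch: the kind of microparsing an X3D loader must do by hand.
# An IndexedFaceSet's coordIndex is flat text inside an attribute,
# with -1 acting as an out-of-band face delimiter (sample data invented).
coord_index = "0 1 2 -1 2 3 0 -1"

faces, current = [], []
for token in coord_index.split():
    value = int(token)
    if value == -1:          # -1 ends the current face
        faces.append(current)
        current = []
    else:
        current.append(value)
if current:                  # tolerate a trailing face with no -1
    faces.append(current)

print(faces)  # [[0, 1, 2], [2, 3, 0]]
```

None of that structure is visible to a generic XML parser, which is the complaint: the DOM hands you one opaque string.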

There is a reason I, a markup wonk if ever there was one, stick to Classic
VRML and VRML97.  I will move on to X3D because eventually I will need some
of the new features like Inlines with interfaces and bits like the Keyboard
Sensor, the upcoming Network Sensor and the physics engine, or the
Nice-to-Haves like the Boolean Sequencer that I can replicate in script but
a node is easier.  For now, I am building in VRML97 where the weather suits
my clothes.

Didn't mean to interrupt this fine design process.  Back to the lurk.  Best
of luck with VOS.  It should be quite cool.

len


From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Peter Amstutz
 
My basic complaint with X3D was that the transition to XML could have 
been an opportunity to clean up a lot of the syntactical warts of VRML 
(delimiting index face sets with -1, for example) while still keeping 
the basic data model.  We could have had a schema that follows the best 
practices for XML document design, follows XML data types, etc and would 
have been a breeze for developers to support.  Instead, they choose to 
do a translation that amounted to little more than replacing curly 
brackets with pointy ones.

Having written a minimalist X3D loader (just geometry) it irked me that 
I had parsing issues -- Parsing Issues, in XML! -- between files 
produced by two different programs because of the extra syntax embedded 
in CDATA that wasn't part of the DOM structure.

I'm not saying it doesn't work, it is primarily an aesthetic issue -- 
but it does cause headaches for developers.




___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] X3D

2007-03-16 Thread Len Bullard
V-Realm Builder was and still is excellent.  I won a copy in a contest over
a decade ago and that was my entry point. I still use it because it is all
VRML and has a great terrain editor and index face set utility, and easy
treeview interface, great support for sequencing and routing, etc.  Big
pieces of ROL were done there.  OTOH, I like Flux Studio.  It is powerful,
has fantastic advanced geometry editing with a drop dead easy interface, and
well... I still haven't gotten through the feature set although I blogged it
at http://3donthewebcheap.blogspot.com  I have to admit that for scene
assembly, PFE is my work horse because I really don't mind native code.  At
some point, one really does have to learn the language and you learn to spot
bugs fast as well as how to make better protos once you see repeating
structures.  After awhile, VRML scene graphs just start to make enormous
sense from the author's perspective.  That is my only caution to the
object-oriented programmers:  it is too much code geekery for an author in
too many cases.  A professional game programmer, yes, but the average kid
getting started, no.  They need to learn a language in the sweet spot
between OpenGL and 3DML.  Go too high level, there isn't enough power.  Go
too low level, it takes too much work to do basic stuff.

I'm starting to post tutorials there regularly.  It is breezy but it is a
blog.  I am describing my processes for building with code samples as much
as anything to give the kids a free place to get the information and a ton
of philosophy about my own story telling processes.  

I call it 3D On The Web CHEAP! because I'm not a believer in the 'ya gotta
buy a server spot at SL' or wherever or 'ya need a copy of Poser and 3D
Maya' whatever to get into 3D.  That wasn't the original promise of 3D on
the web before the new guys rebranded it as 'the metaverse', a term I
consider kind of dumb (More Meta Than Thou is the death spiral of design).
It was 'get an ASCII editor and a browser and if you got the moxie you can
go 3D'.  I realize the need for powerful tools because real-time 3D is a lot
harder than HTML, but I also know that the key to a real metaverse is
accessibility relative to costs for the newcomers.  Otherwise, we might as
well all go back to writing records management systems.

I believe 3D and games are to this generation what rock n roll was to mine.
The best thing I can do for the kids is hold a door open for them as long as
I can, because this is their thing and they need to have their own thing,
not ours.

I really need to shut up. You guys have work to do and I have a blog for
pontificating.  Thanks for the space!

len

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Reed Hedges

Fortunately, it sounds like some good GUI editors are coming out (like
Flux studio).I never did manage to find a really good GUI tool for
building VRML97 that was really focused on VRML and supported all of it,
though I was only looking at the free ones.

(Back when I was able to actually do 3D for a job I ended up exporting
VRML from a simple modeler called AC3D and then mucking about in it by
hand slightly.)




___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] XOD questions

2007-03-15 Thread Len Bullard
How is XML restricting you?  It doesn't care how you use the tree.  There
are things that look silly to an XMLer that may have a legitimate
application.   Bits like 

<parent name=''>
<children></children></parent>

look like someone didn't understand structure given by XML, but they aren't
illegal and there can be reasons to do that.  Things like name='name' look
worse but I am told there are reasons to do that too.

XML doesn't care.

OTOH, I generally agree with Peter that scattering information across a file
has costs.  Some aren't obvious for small files and RAM rich machines, but
as the file sizes increase, the lack of regular and compact structures will
start to cause inefficiency system-wide or so the theory goes.  In terms of
all of the exceptions you have to write in XSLT templates, this is usually
true.

In VRML, one is told one can put ROUTEs anywhere because they aren't part of
the scene graph.  OTOH, ParallelGraphics wants them near the nodes they
connect.  Habitually, we put them all together at the bottom of the file
because of habits acquired early in the VRML era.  Now implementations
confuse us and practice doesn't serve us.  So IME, it is better to have
tighter structures earlier and ask if they are keeping you from doing
something, or simply making you be disciplined.

len


From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Reed Hedges

I don't see why the XOD format should prevent stuff like that-- which
are important abilities we have in VOS.  If we have <link> why not
<parent> (or the parent attribute)? <parent> would just be the inverse
of <link>.

XML is too restrictive here. XOD is already very specific to VOS in its
element names; if you want to reuse a XOD along with some other XML
you're going to be applying a transformation to it anyway, and I'm pretty
sure XSLT and XPATH are powerful enough to gather objects declared in
different places in the file that have parent child relationships.
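As a minimal illustration of the gathering described above, an index keyed on a parent attribute collects declarations wherever they appear in the file. A Python sketch (the element and attribute names here are invented for illustration, not actual XOD):

```python
# Sketch of gathering scattered declarations by a parent attribute.
# Element/attribute names are invented, not real XOD vocabulary.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<world>
  <object name='room'/>
  <object name='chair' parent='room'/>
  <object name='lamp'  parent='room'/>
</world>
""")

# Index children by the parent they declare, regardless of file position.
children = {}
for obj in doc.findall('object'):
    parent = obj.get('parent')
    if parent is not None:
        children.setdefault(parent, []).append(obj.get('name'))

print(children)  # {'room': ['chair', 'lamp']}
```

An XSLT key (`xsl:key` plus the `key()` function) does the same job declaratively; the point is only that document order need not encode the parent-child relationship.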



___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] XOD questions

2007-03-15 Thread Len Bullard
Be fair, Peter.  It is the X3D instance you have to import, not the schema.

The schema is baroque to put it mildly.  I'm not sure if it is used for
import.  BS Contact has a validating switch, but I've not tried it.  The
schema is useful for the x3d-edit utility, but even then, not too many peole
edit graphics like a document unless the are already very familiar with the
tree.

len


From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Peter Amstutz

Well, the idea was more to support the ability to import other file 
formats (X3D comes to mind, although it's maybe not a good example since 
it's really an example of how not to design an XML schema) using a 
straightforward XSLT transform.  Of course, we haven't yet tried writing 
any transforms, so I don't know if it's actually feasible.




___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] Is this helpful?

2007-02-06 Thread Len Bullard
I'm not sure I get what you are after.  I have software to compress audio
(eg, make an MP3), resample it (eg, make 32-bit into 16-bit and reset
44.1khz as 22khz, 11khz or yeaccchhh 8khz).   What you may be looking for is
something like Soundcast that streams it to the PCs.  JOI uses that as I
recall for their in-world radio programs.  This uses the web browser sound
so I am told and the directional sound properties are lost doing that.  

For example, when building an immersive album with VRML, I use the spatial
audio (22khz mono) for ambient sounds (water, wind, doors open and close,
etc.) and turn the spatial off for presentation audio (eg, songs,
backgrounds for presentations).

So not exactly sure what you are after.  The Soundcast sounds *pretty good*
compared to say chunking it into 11khz mono (never drop below 22khz or
musicians will hunt you down and flay you with dull guitar picks).  OTOH,
nothing I've heard on the web is as good as wav files (don't start me going
on about mp3: it is turning your ears into tin) at 44khz stereo.
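The downmixing described above (44.1khz stereo down to 22khz mono for spatialized ambience) amounts to averaging the channels and dropping samples. A naive Python sketch, which skips the low-pass filtering a real resampler would apply first:

```python
# Sketch: mix 16-bit stereo frames to mono and halve the sample rate.
# Naive decimation, no anti-aliasing filter -- real tools do better.
def to_mono_half_rate(frames):
    """frames: list of (left, right) sample pairs at 44.1 kHz."""
    mono = [(l + r) // 2 for l, r in frames]  # average the two channels
    return mono[::2]                          # keep every other sample -> 22.05 kHz

frames = [(100, 300), (200, 400), (-100, 100), (0, 0)]
print(to_mono_half_rate(frames))  # [200, 0]
```

This is the quality trade being made: half the data per second of audio, at the cost of everything above 11khz.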

len

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Jason Heblack
 
Is this helpful?

Does anybody have software which lets you shrink your audio and then
send the file as fast as possible, so that after composition it could be
played back quickly in the model of *Citizens' Band radio* (/CB/) but
with better sound?




___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] VOS requirements

2007-01-25 Thread Len Bullard
X3D has a physics specification underway.   One is already being integrated
into Contact.  X3D already has shaders and scripting plus a metadata node
for indicating semantics.  Since the objects you mention below can be notatd
as say DEF Tree and referenced by that name, I'm not sure what you want for
semantics past that which won't create a badly layered design.

Collada was designed as a transfer format for games.  It is compatible with
X3D.

len

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Karsten Otto
Sent: Thursday, January 25, 2007 4:05 AM
To: VOS Discussion
Subject: Re: [vos-d] VOS requirements

Am 24.01.2007 um 19:05 schrieb Peter Amstutz:

 I agree.  I have the book they published describing the COLLADA spec,
 and intend to base the VOS 3D data models on COLLADA wherever it makes
 sense, including physics parameters.
 [...]

I guess I'll have to chime in at this point...

I only took a quick glance at COLLADA, but it struck me again as a  
format primarily intended to model the *appearance* of a scene,  
rather than its semantics. In that it isn't much different from,
say, X3D, only that it may have more up-to-date features such as  
physics, scripting, and shaders. So once again:

A shaded ball on a shaded cylinder being transformed to bend slightly  
along the Y-axis is just that, it is NOT a tree bending in the wind,  
even if it *looks* like one.

If you ever expect any kind of autonomous machine interaction with a  
VOS world, please design the 3D data model so that it can co-exist  
with a semantic data model. Even better, make the semantic entity a  
first-class member of the world, and put the appearance in a child
(or even external reference), i.e.

world ---member--> tree ---appearance--> (cylinder, sphere)
world ---member--> house ---appearance--> (box, extruded triangle)

much better than

world ---member--> (cylinder, sphere)
world ---member--> (box, extruded triangle)

and expecting anybody other than a human with a visualization program
to ever get the meaning you want to convey. Of course, if you saw
Matrix a few times too often, there is no better way to get rid of
an agent than placing it in a world it cannot possibly make sense
of :-)

Touching another topic, this kind of semantic design also gives you a  
selection criterion for scene querying and caching, reducing  
bandwidth and memory overhead. If your client is only interested in  
trees but not houses, why should it download the complete appearance  
definition of the house? Also, if it has no 3D display capability, it  
might want to download only the member metadata, and possibly a
different form of appearance such as a text or 2D icon.
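The semantic-first layout argued for above can be expressed as a tiny data model. A Python sketch (class and field names are illustrative, not actual VOS types):

```python
# Sketch of "semantic entity first, appearance as a child".
# Names here are illustrative only, not real VOS types.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Appearance:
    shapes: List[str]                  # e.g. ['cylinder', 'sphere']

@dataclass
class Entity:
    kind: str                          # semantic type: what the thing *is*
    appearance: Optional[Appearance] = None  # optional, could be fetched lazily

world = [
    Entity('tree',  Appearance(['cylinder', 'sphere'])),
    Entity('house', Appearance(['box', 'extruded triangle'])),
]

# A non-graphical client can query by meaning and skip appearance entirely.
trees = [e for e in world if e.kind == 'tree']
print([e.kind for e in trees])  # ['tree']
```

Because the appearance hangs off the semantic node rather than the other way around, a text-only or bandwidth-limited client can stop after reading `kind`.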

Regards,
Karsten Otto (kao)

___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d






Re: [vos-d] SecondLife client goes Open Source

2007-01-08 Thread Len Bullard
It was expected.  It gives them a way to push the financing of the
development off to organizations like IBM and to claim they are an open
platform.  They need to do something to stop the burning of the VC capital
and they have to solve out some very difficult technical problems.

Expect yet-another-big-burst of CNet articles.

len

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Or Botton
Sent: Monday, January 08, 2007 9:38 AM
To: VOS Discussion
Subject: [vos-d] SecondLife client goes Open Source

LindenLab have just opened the source code for the SecondLife client.

http://secondlife.com/developers/opensource/

This step has actually surprised me - I didn't think that they were  
anywhere near doing this for the next two years or so.

___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d






Re: [vos-d] SecondLife client goes Open Source

2007-01-08 Thread Len Bullard
Letting out the viewer is something of a SOP.  I think the server-side is
possibly more important given that there are any number of open source
viewers out there for 3D platforms that are just as good or better.  It is
the management of the server farm that makes the difference, that and a big
budget for marketing.

Yes, I think they are looking at migrating the building market, but the only
thing that brings in the bigCos is the site traffic.  Otherwise, to Sears,
there is no advantage to being there.   IBM can talk a lot about boardroom
VR but they are a services company in this market and without other
companies willing to host on private farms, there is no market.

There is a lot of puff in the online worlds market.  Of what value is it to
own content that you can't move because it only works on that platform?  So
like a Macintosh or a Mall, without a big membership that is actually going
there often, having a presence there is largely a decorative bauble, a loss
leader for being 'in the know'.  This market is relying on the naivete of
the IT groups of the companies hosting there.

The in-world economy is a fascinating experiment in waiting to see when the
Feds will begin to look at it the same way they look at church bingo.  They
tend to wait until the value is high enough that they can safely take their
cut without killing the game.

len

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Or Botton
Sent: Monday, January 08, 2007 10:06 AM
To: VOS Discussion
Subject: Re: [vos-d] SecondLife client goes Open Source

Granted, it was expected, but there is one major issue thats a big  
bad omen: And thats content copy protection.

SecondLife has been largely touted as a place where you can make a  
quick buck by creating and selling copies of content. This is  
mostly an artificial market created by placing DRM on objects - being  
able to flag a texture, model, script or an entire package as non  
copyable, modifyable or transferable.

Personally, I am all for an opensource platform with no DRM involved.  
I believe that a VR platform can only become mainstream and  
widespread if it is open and free. But SecondLife's act is more self  
destructive because by nature they are not open and free.

With the source out, it would be a rather easy task to duplicate  
models and textures of objects, pretty much breaking the DRM with a  
very casual effort from the programmer. This could be very damaging  
to their internal economy. Again, I do not support the concept of  
having virtual economies, but doing what they just did is more like  
shooting themselves in the foot.

Perhaps this signals that LindenLab now views the big gamers -
companies and such - as the real customers? These people will have
much less of an issue enforcing their copyrights than the regular
person.




___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] SecondLife client goes Open Source

2007-01-08 Thread Len Bullard
That's a fair comparison, Peter.  There are many lastGen web marketing games
being played and some even older Hollywood tricks like the dust-up between
the Graefs and the media that published pix and vids of the famous flying
penises griefer incident imposed over her avatar image.  CNet poses it as a
'virtual world' rights incident and a copyright infringement, which it isn't,
but it gives CNet another excuse to put up yetAnotherSL-related story.  The
Hollywood Catfight for Publicity is a well-worn trick.

Currently we are in a simultaneous phase of do-overs (VR_The_New_Thing: WE
DO IT RIGHT THIS TIME!) and brand devolution as stalwarts such as CNet
become web'loids manufacturing stories and controversies to get eyeballs.
Depending on your point of view or market, having real-time 3D come to the
front of the pack in this environment may be a curse or a blessing.  The
good news is that the technology is being taken seriously again as a market
in the business development offices of major companies; the bad news is the
MAC-Is-The-Platform-of-Choice, aka, the closed systems marketers, are
leading the charge.  Lots of pundit sites such as Terra Nova are repackaging
worn clichés but getting academic grants for them.  Bruce Damer is looking
for help in documenting the History Of Online Communities.  There is sort of
a bum's rush by some to be seen as the Gandalfs of VR and I have to suspect
some of them are Sarumans In Saris but hey, they keep the presses running
stories about VR and real-time 3D and that is good for all of us.

Meanwhile everyone is trying with every blurb private or public to kill VRML
and X3D because the Web3DC is sitting on the ISO gold standard; so, when IBM
steps forward and claims that there are no standards for 3D On The Web, IBM
looks sort of stupid.  The truth is, there are, but they are royalty-free
and unencumbered, and that messes with their plans to get that 99% because
there is no complexity moat for the client side, and that violates the
classic Warren Buffett rules for evaluating a start-up or technology
(barriers for competition).

For niche players, the off-the-web applications of web technologies have
promise and have gotten serious attention because of major contracts in the
Federal markets.  The entertainment industry still doesn't quite know what
makes this NOT a game market and most of the nova-pundits don't either.
This will be the year when a lot of it sorts out.  In times of change, I say
find your natural allies and work together to keep the market and/or your
technology on track for whatever it is you mean to do with it. 

Me:  just building a prototype world for fun and illumination.  VRML97 still
works for that and I may move it on to X3D.  After building worlds for a
hobby for a long time now, I know that I want to be able to pick up a
project even if it is a decade old and finish it or recycle it.  For that I
need real standards and technology that keeps working.  For that, ISO is
gold.  They are slow but very predictable. 

Do what you do with enthusiasm and a deaf left ear.

len

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Peter Amstutz
Sent: Monday, January 08, 2007 2:29 PM
To: VOS Discussion
Subject: Re: [vos-d] SecondLife client goes Open Source

Interesting.  This reminds me of the .com mantra Get Big or Die -- 
which usually meant expanding quickly and burning through millions of 
dollars to try and capture 99% of a market that hasn't yet even been 
proven to be profitable.

In a couple years we'll be able to look back and figure out where Second 
Life is on the launch parabola -- has it achieved escape velocity and will 
be the next Amazon or Yahoo, or come crashing to earth when the fuel 
(stacks of crisp venture capital dollars) runs out?  I haven't seen much 
word-of-mouth promotion of Second Life, and what I have seen has been 
mostly negative (of course I'm biased here).  Rather there's been a lot 
of over-the-top hype and top-down marketing, rather than the sort of 
grass-roots support that suggests a sustainable platform.

They desperately want to make SL seem bigger than it is, because people 
like a winner.  But if the real numbers are right (250,000 accounts 
logged in at least once in the last two months, 15,000 simultaneous users 
at peak usage) I can't help but think the user community is really, 
really small considering their multi-million dollar investment in 
hardware, software and marketing.

Also I agree that they're walking a fine line between the natural laws 
of cyberspace and real-world legal systems, and this could really burn 
them at some point down the road.  Whenever someone tries to bend 
cyberspace to conform to their idea of what should and shouldn't be 
allowed (as opposed to what is naturally possible or impossible) 
cyberspace ends up worse off for it.


On Mon, Jan 08, 2007 at 12:42:10PM -0600, Len Bullard wrote:
 Letting out the viewer is something of a SOP.  I think the server-side