On 17/02/12 09:56, Wade Schuette wrote:
I imagine that, aside from the coding language, the rest of the architecture and database design are equally ad hoc: great at the time as one of those "temporary solutions" that LL rapidly outgrew. Are they running MySQL under the covers?
The asset server has clearly bogged down, and the tables are probably badly fragmented, but I don't think they have the ability to defragment them, or even to recognize the problem for what it is. Whatever the cause, the system certainly didn't scale or age well, and it clearly has no transaction control, so things get lost routinely.
A monolithic non-distributed design, implemented on a cloud of servers, is an astoundingly poor use of resources. The whole busy/idle problem is too: 100 avatars can work fine at one per sim, but if they all come together, that one CPU grinds to a halt while 99 CPUs sit idle. Easily 95% of the computing power of the server farm is wasted.
I think a lot of these problems stem from simulating physics server-side. However, it's very difficult to work around this in a system with 3D content generated by users in real time. Without real-time 3D UGC or server-side physics it's a whole lot easier, which I think is why you see MMORPGs support many more people in worlds that don't have a fixed spatial distribution of CPU power. You can put a whole shard on a really powerful server and let it get on with it.
The question is whether real-time 3D UGC is essential for a virtual world (as opposed to a game or a more limited
virtual environment). I would say that it is.
The "silos with messaging" approach to growth also results in the total chaos
when anyone or anything simply attempts to
move from one sim across a boundary to the next sim.
Still, all of the above problems could be fixed and redesigned away without
having to break anything at the user's level.
I think their largest constraint on growth IS somewhat more deeply embedded in code: their data structure for "objects" allows only a single level of linking. Once you link those "wheels" to the "car" there are no "wheel" objects any more, and God help you if you want to change the tires.
I do find that a curious decision. I can imagine there might be some form of trade-off in terms of a simpler UI. But
it's probably not worth it in the end.
However, I would have thought that you could slot multi-level linking into the existing system. All existing linksets would just end up having a single level of links.
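To make the flatness concrete, here's a minimal LSL sketch (mine, written for illustration, not anyone's actual code): once prims are linked, every part is just a sequential link number, and there is no way to ask which links came in together as a "wheel".

    // Walk the linkset: every prim, wheel or otherwise, is just a
    // flat link number once linked; no sub-linkset structure survives.
    default
    {
        touch_start(integer total_number)
        {
            integer count = llGetNumberOfPrims();
            integer i;
            // Link 1 is the root prim; children run from 2 upwards.
            for (i = 1; i <= count; ++i)
                llOwnerSay("Link " + (string)i + ": " + llGetLinkName(i));
        }
    }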
Also, the way the code is implemented discourages building with distributed intelligence among the parts, and encourages monolithic scripts that run everything from the root prim. More than once I've tried to build a cleanly distributed intelligent object and given up, going back to central scripts.
Yes, LSL is really, really horrible for this kind of thing. If I have to write something really complex I would use a region module instead (on the OpenSim side, at least).
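To make Wade's point concrete, here's the kind of plumbing a distributed build needs in LSL -- a hedged sketch, where the command number 42 and the "spin" message are arbitrary choices of mine, not any standard protocol:

    // Script in the root prim: broadcast a command to all child prims.
    default
    {
        touch_start(integer total_number)
        {
            // 42 is an arbitrary command id invented for this example.
            llMessageLinked(LINK_ALL_CHILDREN, 42, "spin", NULL_KEY);
        }
    }

    // Script in each "wheel" prim: react to commands from the root.
    default
    {
        link_message(integer sender, integer num, string msg, key id)
        {
            if (num == 42 && msg == "spin")
                llTargetOmega(<0.0, 0.0, 1.0>, TWO_PI, 1.0);
        }
    }

Since every link message shares one untyped channel, any non-trivial object ends up inventing its own ad hoc protocol on top of this, which is exactly why people give up and centralise.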
Overall, I suspect that, as always, "the work of the hands reflects the state of the heart." Their management style involves silos of teams that may message each other but don't cross boundaries well, with massive central control that limits creativity and forces changes to be prohibitively huge and staged rather than incremental and continuous.
As near as I can tell the whole architecture is on "milking status", with an effective freeze on putting money into fixing things such as the Marketplace, which is clearly in a different silo from the developers.
I'm not sure I would agree. The new pathfinding stuff, for instance (which must have a big eye on the virtual pet market), is particularly interesting. It reminds me of things like Creatures, where simple mechanisms allow emergent behaviour to arise.
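For anyone who hasn't looked at it yet, a minimal "wandering pet" sketch along the lines of the new calls (untested on my part, and the exact signatures may shift as the feature leaves beta; the region must have pathfinding enabled):

    // Turn this object into a pathfinding character that wanders
    // within a 10m box around its starting position.
    default
    {
        state_entry()
        {
            llCreateCharacter([CHARACTER_DESIRED_SPEED, 2.0]);
            llWanderWithin(llGetPos(), <10.0, 10.0, 10.0>, []);
        }
    }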
It's like a piston-driven internal combustion engine -- way better than the horse-drawn carts before it, but now that the market has developed, it is seen to have no chance of being "tweaked" to match next year's turbine-engined designs, with true distributed intelligence and scalable growth without performance disaster.
In fact, if a single thing defines their limits, it's an architecture where the more servers are added to the mix, the HARDER it becomes to operate at any reasonable speed or accuracy. Their help desk is massively overwhelmed already and must pray for the number of users to stop growing. One can imagine an architecture with the opposite property, where every new user and server chips in a little more capacity, actually increasing performance and the ability to do self-healing quality control.
I think this is one of the most interesting things about Hypergrid: it allows virtual environments to spread across multiple operators yet retain navigation between them. It's a much more web-like approach, where no one entity is in overall control.
As such, it doesn't suffer the scaling or business vulnerability problems that come with a centralized system. And it
allows a lot more experimentation to go on in social, business and technical terms. I think that this whole space is
still so terra incognita that nobody knows what it's good for yet and the more experimental freedom we have the more
likely we are to find the answer(s).
However, I think one of the challenges (apart from security) is whether the data standards between those systems (e.g. avatar transfer, avatar attachments, object transfer, scripting) can evolve as time goes on, much as HTTP and HTML were deliberately designed to be evolvable. It's enough of a pain to create the hundreds of different user/login credentials on different web systems today, let alone create multiple avatar appearances in different virtual environments. And if you throw inventory into the mix (which is another kettle of fish), the barrier to creating more than one virtual presence is really high.
The other problem is one of ecosystem sustainability. Linden Lab has a business model which works (albeit apparently
less so nowadays). If one gives away server (and client!) software for free then the development of these has to be
funded in other ways. I don't believe that volunteer hacking is enough.
Wade
On 2/17/12 1:18 AM, Toni Alatalo wrote:
On Feb 17, 2012, at 9:00 AM, Drew Hart wrote:
money. The whole world is built on *old, inefficient code*, and if Linden tries
to update it those virtual objects
can break, triggering massive backlash from buyers and sellers." (Emphasis mine)
I am just curious - is this statement true? Is it true of OpenSim? I feel like it's not true, but I am curious for comment. And are we sacrificing quality to ensure backwards compatibility? I guess this is a philosophy
I'd dare to say: yes. With some reservations.
Rationale: LSL itself, for example -- at least the current implementations of it -- is AFAIK relatively inefficient. Not to mention that it's not the greatest nor the best-known language around, nor one with third-party libraries etc. The LLUDP protocol is another problem point, but I'll focus on scripting here as that's what your post seemed to refer to.
If you compare LSL with a completely from-scratch approach, where you would drop all concerns about backwards compatibility, you could use either JavaScript with the powerful, optimized V8 engine (used in Chrome and in many places that embed JS now) or, for example, Lua, which has become really popular in games and is fast and light.
The reservations: I'm sure both the SL and OpenSim backends have done good optimization work, e.g. in the script engines, and Linden has been working on their viewer too. Usually it is possible to optimize and clean up implementations while still keeping backwards compatibility; I don't mean to belittle that work, nor to say that it would be impossible. There might be some weird things in LSL that prevent certain cleanups or optimizations for backwards-compatibility reasons, but I'd guess those points are rare.
Anyhow, my bet is that LSL will never beat V8 (with the huge Google effort behind it), nor Lua (whose nice clean design also allows great speed, with LuaJIT2), in quality -- considering both the niceness of the languages and the efficiency of execution.
C# scripting for SL seemed promising in Babbage's demo, though, and that would be plenty nice and fast. And with OpenSim you get that efficiency by writing region modules.
In realXtend, with the Tundra SDK, we've now been pursuing an approach where we dropped most of the legacy (slviewer and opensim) altogether, compatibility included. So there at least you have something to compare with: a nice, clean, efficient system, but with no SL compatibility. If someone is interested we can do benchmarks; just tell us what to test and we'll report :) We currently use JS for apps (not V8 at the moment, though there's a branch of QtScript with which we can get that) and may test Lua too. My wish is that we remain a humble part of the OpenSim community, even though we use different technologies -- alternative tools that suit different purposes are good to have around.
And the fact that all of you out there in the big world use OpenSim happily and can't e.g. switch to Tundra is a perfect example of why backwards compatibility is a big deal :) We often have cases where legacy doesn't matter -- some new game or customer project where we need to make a custom app, perhaps with no SL-like functionality at all -- so in those cases it's not a problem and we can pursue this route.
Drew
2cently yours,
~Toni
http://techcrunch.com/2012/02/16/littletextpeople/
--
Justin Clark-Casey (justincc)
http://justincc.org/blog
http://twitter.com/justincc
_______________________________________________
Opensim-users mailing list
[email protected]
https://lists.berlios.de/mailman/listinfo/opensim-users