Hi,

On Monday, June 18, 2012 13:20:17 castle...@comcast.net wrote:
> The HLA/RTI architecture is far more sophisticated than what might be
> needed. The idea is not to split FlightGear into a distributed, federated
> application across a multi-platform machine or network, although that is an
> intriguing prospect for stimulating the brain cells. ;-)
While the interface provides more than you need today, I think the major 
benefit is that it shields a lot of the synchronization stuff away from you, 
so that you can still program your local component in a single-threaded way. 
The coupling of components *can* be tight enough that you get deterministic 
time slicing for some of the components. Say you want to simulate glider 
towing: you can do that with each FDM running in its own process while still 
deterministically exchanging simulation results at the FDM's rate and at 
time step boundaries.
The same goes for components within the aircraft, which I consider a possible 
use case for your kind of application.
In contrast to that, you can still run the viewers asynchronously to the 
simulation core, hopefully providing 60 stable frames per second without 
being disturbed by the synchronization needs of specific components.
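To make the time slicing idea concrete, here is a standard-library-only sketch 
of the scheduling semantics, not of the RTI API itself: two toy FDM instances 
(a hypothetical towplane and glider with a made-up coupling) advance in fixed 
steps and only ever see each other's state from the last step boundary. That 
is the ordering guarantee the RTI's time management (timeAdvanceRequest / 
timeAdvanceGrant) gives you, even when each FDM runs in its own process or on 
its own machine.

#include <cstdio>

// Hypothetical minimal FDM state, just enough to show the idea.
struct FdmState { double pos, vel; };

// One fixed step of a toy FDM. The "other" state is whatever was published
// at the last step boundary, never something from the middle of a step.
static FdmState step(const FdmState& own, const FdmState& other, double dt)
{
    FdmState next = own;
    // Made-up coupling: accelerate towards the other vehicle (stands in for the tow rope).
    next.vel += 0.1 * (other.pos - own.pos) * dt;
    next.pos += next.vel * dt;
    return next;
}

int main()
{
    const double dt = 1.0 / 120.0;            // a common FDM rate
    FdmState towplane = { 0.0, 30.0 };
    FdmState glider   = { -60.0, 30.0 };

    for (int i = 0; i < 1200; ++i) {          // 10 seconds of simulated time
        // Both components compute their next state from the same published
        // snapshot; with HLA the time advance grant enforces this ordering
        // across process or machine boundaries.
        FdmState nextTowplane = step(towplane, glider, dt);
        FdmState nextGlider   = step(glider, towplane, dt);

        // "Publish" at the step boundary: both results become visible together.
        towplane = nextTowplane;
        glider   = nextGlider;
    }
    std::printf("towplane at %.1f m, glider at %.1f m\n", towplane.pos, glider.pos);
    return 0;
}

The viewers simply would not take part in that lockstep: they stay free 
running and just render the last state they received.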

So, you might have an idea how to do that with IPC directly, and trust me, I 
have considered this at some point. But what this standard provides is driven 
by exactly those problems that need to be solved once you dig into this kind 
of implementation. So one of the benefits is that you gain an encapsulated 
communication library that does what you need. This library can be tested 
independently of an application beast like FlightGear. And that is IMO a huge 
benefit.

> By way of an example, consider the 3D cloud system.
> 
> Given a three projector system, each CPU is configured in a similar manner
> as before for a multi-monitor system; i.e. there is a master FDM, the slave
> FDMs are disabled and each CPU is bound to a display. I don't recall the
> exact syntax but for those who have run multi-monitor display systems you
> understand. The doc files and readme's provide a good description of how to
> implement this configuration for those not familiar with the setup. The
> downside in this approach is that each CPU creates its own graphics
> context and dynamic/random scenery objects are not sync'd. It has been a
> year or two since I last spent any time digging into or running FlightGear
> with master/slave machines. The current 737/747 sim runs on a single CPU
> with three projectors to make use of all the "eye-candy". But I believe the
> above assertion is still true.
> 
> In the case of the cloud system, something similar might be possible. Rather
> than using the network, we would use shared memory as the IPC. The master
> cloud generator creates the shared memory segment and manages the cloud
> objects. The slaves obtain the objects from the memory segment and render
> as required. They do NOT create their own objects. AI objects could be
> handled as well with this approach.
Yep, this is exactly what I want to do with the HLA stuff.

A weather module running in a different process/thread/machine computes 
positions for clouds that are consistently displayed on each attached viewer. 
That module would be exchangeable: the simple version just interpolates 
METARs like today, but more sophisticated versions might do a local weather 
simulation to get good results for thermals in some small area.
The same goes for every component you can think of splitting out. A simple AI 
model just does what it does today, but more sophisticated modules might 
contain their own FDM so that these machines really live in the same fluid 
that the weather module provides data for.
Note that the RTI API already provides a subscriber model that ensures you 
don't feed data to participants that don't need it.
Maybe a radar controller screen could be attached there to see the machines 
in this world. But of course that radar screen is not interested in flap 
animations for the aircraft ...
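The declaration of interest for such a radar screen federate could look 
roughly like this (again rti1516e, assuming an ambassador that is already 
connected and joined as in the sketch above; the Aircraft class and its 
attribute names are again made up):

#include <RTI/RTIambassador.h>

// The radar screen asks only for position and callsign, so the RTI never
// routes flap or gear animation updates to this federate at all.
void subscribeRadarView(rti1516e::RTIambassador& ambassador)
{
    rti1516e::ObjectClassHandle aircraftClass = ambassador.getObjectClassHandle(L"Aircraft");

    rti1516e::AttributeHandleSet wanted;
    wanted.insert(ambassador.getAttributeHandle(aircraftClass, L"position"));
    wanted.insert(ambassador.getAttributeHandle(aircraftClass, L"callsign"));

    ambassador.subscribeObjectClassAttributes(aircraftClass, wanted);
    // Updates now arrive through FederateAmbassador::reflectAttributeValues,
    // restricted to exactly the attributes requested above.
}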

Or take the viewers: if you just exchange data through a shared memory 
segment, you are limited to a single machine. That's fine for the 3-channel 
setup you have. But I know of installs with 9 channels that I visited some 
time ago; they run FlightGear on that beast, by the way. And I know of 
installs that currently run 14 or 16 channels within a single view. For that 
reason I thought: better to have a technology that is also extensible to this 
kind of install instead of programming everything on top of something limited 
to a single machine.
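Just for comparison, the machine-local shared memory route would look roughly 
like this on the POSIX side (so Linux and Mac but not Windows, which is 
exactly the portability question below; the segment name and the cloud record 
layout are made up):

#include <fcntl.h>      // O_CREAT, O_RDWR
#include <sys/mman.h>   // shm_open, mmap, munmap
#include <unistd.h>     // ftruncate, close
#include <cstdio>

// Made-up fixed-size record the master cloud generator would fill in and the
// per-channel viewers would only read.
struct CloudSegment {
    unsigned count;
    struct { float x, y, z, size; } clouds[1024];
};

int main()
{
    // The master creates "/fg-clouds"; the slaves would open it without O_CREAT.
    int fd = shm_open("/fg-clouds", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { std::perror("shm_open"); return 1; }
    if (ftruncate(fd, sizeof(CloudSegment)) != 0) { std::perror("ftruncate"); return 1; }

    void* memory = mmap(0, sizeof(CloudSegment), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (memory == MAP_FAILED) { std::perror("mmap"); return 1; }
    close(fd);

    CloudSegment* segment = static_cast<CloudSegment*>(memory);
    segment->count = 0;  // the master populates this, the slaves only render it
    // Real code still needs locking or a versioned double buffer, and it only
    // ever reaches processes on this one machine.

    munmap(memory, sizeof(CloudSegment));
    return 0;
}

That works, but it stops at the machine boundary, which is where the RTI 
route keeps scaling.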

> Each fgfs executable is still a monolithic process within the supporting CPU
> and would not require major surgery on the existing source outside of
> adding a shared memory instantiation. The question would be how to make it
> applicable for all platforms. Linux I can do, clueless for Mac and MS.
That is also a benefit of the HLA stuff: use this and it already works on all 
platforms.

> With Gene and Wayne's awesome work on collimated displays, have a proto
> version of the required warping code and mesh generation working ala Sol7 (
> btw I owe Mathias a response and some data and source, my bad, just too
> busy ATM ). Providing a basic mechanism to run with multi-core machines,
> support collimated display systems, and preserve all the great new features
> would greatly enhance Flightgear as a professional product. IMHO we need to
> keep the vision of FlightGear as a product that is attractive to
> professional organizations and keep it compatible with improving hardware
> and software technologies.
Ah ok, you got my mail. I got some sent mails bounced back from your mail 
delivery agent at comcast, which makes me believe that some filter at your 
site does not trust me. :-(

In the end I believe that we are both pulling on the same side.
My time is limited too, so things do not go at the rate I want them to go.
But having thought through the use cases and communication needs that I can 
see, I came to the conclusion that the RTI abstracts away the communication 
stuff in a way that matches exactly the needs of a distributed simulation, 
where it does not matter whether things are distributed across processes, 
threads or machines. That's the reason I prepared the groundwork by starting 
that own project, OpenRTI. So this project is purely driven by distributing 
FlightGear across more computation power.

Greetings

Mathias
