Hi,

An update on our work to get TIPC to function on both the
control network and the data network in an ATCA system.

As you may recall from previous emails, two approaches were being
considered: 1. "tipc routing" and 2. "tipc virtualization".


Ravi (cc'd) and I have been looking at and prototyping both these ideas.

A summary of work in progress.

1. Routing.

The tipc routing approach seems doable by adding:
 - a special link for the data network,
 - conditional routing in a half-dozen or more sections of the tipc code.
For sends that have access to the destination address this is very
easy: we just route a configured range of tipc addresses over the
data-network link (see the sketch below).  The main challenge is
dealing with sends that do not have access to the destination address
(connection-oriented sends and server replies) but instead use an
internal tipc object reference (ref.c).  Conceptually it's easy to see
that we could split the reference objects into a group that gets
routed over the data network and leave the rest as is.  The open issue
right now is understanding exactly what the ref.c code does.  Since
I'm not too keen on this approach, Ravi and I haven't bothered to
figure it out yet - I'm sure it wouldn't take too long.
For multicast sending, we'd have to create another multicast link for
the data plane.
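
To make the "route a range of tipc addresses" idea concrete, here is a
minimal sketch of the kind of plane-selection helper the send paths
would call.  The helper, the enum and the hard-coded node range are all
made up for illustration - in a real patch the range would come from
configuration - but the address packing matches tipc's
<zone.cluster.node> layout:

/*
 * Sketch only - not tipc code.  Pick the outgoing plane from the
 * destination address.  TIPC packs zone (8 bits), cluster (12 bits)
 * and node (12 bits) into a u32.
 */
#include <stdio.h>

typedef unsigned int u32;

#define tipc_zone(addr)     ((addr) >> 24)
#define tipc_cluster(addr)  (((addr) >> 12) & 0xfff)
#define tipc_node(addr)     ((addr) & 0xfff)

enum plane { CONTROL_PLANE, DATA_PLANE };

/* Hypothetical configured node range that should use the data network. */
static const u32 data_node_lo = 0x100;
static const u32 data_node_hi = 0x1ff;

static enum plane select_plane(u32 destnode)
{
        u32 n = tipc_node(destnode);

        return (n >= data_node_lo && n <= data_node_hi) ?
                DATA_PLANE : CONTROL_PLANE;
}

int main(void)
{
        u32 dest = (1u << 24) | (1u << 12) | 0x150;    /* <1.1.336> */

        printf("<%u.%u.%u> -> %s plane\n",
               tipc_zone(dest), tipc_cluster(dest), tipc_node(dest),
               select_plane(dest) == DATA_PLANE ? "data" : "control");
        return 0;
}

Presumably the same decision could sit behind the ref.c split: tag each
reference object with the plane it was created for and route from that.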

All of this seems like at least as much work as virtualization...


2. Virtualization.

At first I thought that this would be more work than the routing
approach, but that changed once I realized that for now I just need
one other tipc stack, not an array of stacks as I was originally
planning.  So our approach here is to use "nm" on the tipc.ko binary
to build up (and filter) a list of function and data structure names,
which is then run through a search and replace script over the entire
tipc source tree.  So for example, tipc_nameseq_create would become
vtipc_nameseq_create, and so on.
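
The rename is purely textual.  The prototype below is only illustrative
(quoted loosely from memory - the exact signature doesn't matter to the
script):

/* A line from the original tipc tree (name_table.c), roughly: */
struct name_seq *tipc_nameseq_create(u32 type, struct hlist_head *seq_head);

/* The same line in the generated vtipc tree, assuming a "vtipc_" prefix: */
struct name_seq *vtipc_nameseq_create(u32 type, struct hlist_head *seq_head);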

The next step is to replace the tipc ethertype (0x88ca) with 0x88cb or
whatever ethertype we choose.  Burning a second ethertype seems rather
wasteful and unlikely to be generally adopted, but it's simple and it
will get us going.
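
In kernel terms this is a small change - it's the ethertype that the
ethernet bearer registers its packet handler on.  A rough sketch (this
is not the real per-bearer structure from eth_media.c, and ETH_P_VTIPC
is just a placeholder value):

#include <linux/if_ether.h>      /* ETH_P_TIPC is 0x88CA */
#include <linux/netdevice.h>

#define ETH_P_VTIPC 0x88CB       /* placeholder ethertype for the clone */

/* Same shape as tipc's ethernet receive handler. */
static int vtipc_eth_rcv(struct sk_buff *buf, struct net_device *dev,
                         struct packet_type *pt, struct net_device *orig_dev);

static struct packet_type vtipc_packet_type = {
        .type = __constant_htons(ETH_P_VTIPC),
        .func = vtipc_eth_rcv,
};

/* In the bearer enable path:  dev_add_pack(&vtipc_packet_type); */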

Then we change AF_TIPC to AF_V_TIPC.
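
Registering the address family looks similar - the clone registers its
socket create routine under a family number other than AF_TIPC.  The
value below is purely a placeholder; picking a real family number would
need more thought:

#include <linux/module.h>
#include <linux/net.h>
#include <linux/socket.h>        /* AF_TIPC */

#define AF_V_TIPC 29             /* placeholder - any unused family number */

static struct net_proto_family vtipc_family_ops = {
        .owner  = THIS_MODULE,
        .family = AF_V_TIPC,
        .create = vtipc_create,  /* the renamed copy of tipc's socket create */
};

/* At module init:  sock_register(&vtipc_family_ops); */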

Finally, we have to make the tipc-config tool talk to the right
instance of the tipc stack, probably by changing the netlink
identifier and generating a vtipc-config binary.
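
For the netlink side, tipc registers a generic netlink family by name
(TIPC_GENL_NAME, i.e. "TIPC"), and tipc-config resolves that name to a
family id at runtime, so the clone just needs its own name and
vtipc-config asks for that one instead.  Roughly (the "VTIPC" name is
just our choice; the rest mirrors tipc's netlink.c):

#include <net/genetlink.h>
#include <linux/tipc_config.h>   /* TIPC_GENL_NAME, TIPC_GENL_VERSION */

#define VTIPC_GENL_NAME "VTIPC"  /* placeholder family name for the clone */

static struct genl_family vtipc_genl_family = {
        .id      = GENL_ID_GENERATE,
        .name    = VTIPC_GENL_NAME,
        .version = TIPC_GENL_VERSION,
        /* hdrsize, maxattr etc. as in tipc's netlink.c */
};

/* At module init:  genl_register_family(&vtipc_genl_family);
 * vtipc-config then does the usual CTRL_CMD_GETFAMILY lookup by name,
 * but for "VTIPC" instead of "TIPC". */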

If we add these scripts and our patch to the tipc code base, then we
have a way to generate a vtipc.ko stack on demand - the code
maintenance issues go away.

Ravi has done this work.  He can load vtipc.ko, but as of yesterday he
gets a collision when he tries to load the original tipc.ko.

Comments welcome.

Some notes:
1. One could create an arbitrary number of stacks just by changing the
prefix: vtipc_nameseq_create --> v1_tipc_nameseq_create, and so on.
Ugly, but manageable for small numbers of stacks.

2. It's a bit wasteful to replicate the tipc code (the functions) as
well as the data structures - strictly it's only the global and dynamic
data that we need to virtualize - but it's not that much memory and it
seems to be the easier approach.

3. I don't like the "tipc routing" label, since we use that term for
hierarchical tipc, but I don't have a better name other than:
  control/data plane routing or plane selection (TIPC C/DPR !).
That's a bit of a mouthful! ;-)

4. I haven't thought much yet about how this works with tipc-1.7.x.

I'm on a (delayed) vacation this week and maybe next so I might not
reply quickly.
Ravi is on this mailing list too, so hopefully he will answer
questions before me.

// Randy

