The design goal in implementing loop detection is to detect and stop
infinite routing loops quickly.  (The Max-Forwards mechanism will always
terminate them eventually.)  The implicit complementary design goal is
to never signal "Loop Detected" on a request that would succeed
otherwise.  It's easy enough to say "Nobody will ever be bothered by [a
particular heuristic rule]!", but the history of computer science is
littered with systems that were unusable because their rules were not
simple, general, and uniform.  ("Why would anyone want to multiply
something by zero?  They already know the answer is zero!")

The problem with loop detection is that it is difficult to construct a
heuristic that will successfully detect any significant number of loops
without blocking some legitimate non-loop request.  With a bit of
imagination, one can construct plausible scenarios where almost any
header is significant for routing decisions.  Perhaps the only headers
which one can be sure will not (should not?) affect routing are Via and
Max-Forwards themselves.  And since one proxy does not have global
knowledge of other proxies, it has to make the most pessimistic
assumptions about what aspects of a request affect its routing.

Conversely, it is difficult to construct a heuristic that will notice
some simple errors.  Any mapping loop that turns a URI into a longer URI
is likely to missed by a heuristic that does not examine user-parts for
suspicious similarities.  But knowing when there is a loop of that sort
is difficult, since even substring-match-and-replace is computationally
complete ("Post productions").
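
To make this concrete, here is a small sketch (hypothetical names, not
real proxy code) of a rewrite rule that lengthens the user-part on every
pass.  A loop detector that only compares exact Request-URIs never
fires, because every URI it sees is new:

```python
# Hypothetical sketch: a rewrite rule that lengthens the user part on
# every pass defeats a loop detector that only compares exact URIs.
def rewrite(uri):
    # e.g. "sip:foo@example.com" -> "sip:foox@example.com"
    user, domain = uri.removeprefix("sip:").split("@")
    return f"sip:{user}x@{domain}"

def naive_loop_check(uri, seen):
    """Flags a loop only if this exact URI has been seen before."""
    return uri in seen

uri = "sip:foo@example.com"
seen = set()
for hop in range(5):
    assert not naive_loop_check(uri, seen)  # never fires: URI always new
    seen.add(uri)
    uri = rewrite(uri)
# The request loops forever through the same proxy, yet every
# Request-URI is distinct; only Max-Forwards will stop it.
```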

Vijay K. Gurbani writes:
> A proxy, when it is presented with a request,
> ought to know whether or not the same form of this request has
> made its way through this proxy before (and Step 8 of Section
> 16.6 in rfc3261 provides some guidance).

True.  But the question is whether the presence of this request at this
proxy signals that the request will never succeed, or is just part of a
complex but functional routing scheme.
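
The 16.6 step 8 idea can be sketched roughly like this: the proxy folds
the fields that drove its routing decision into the Via branch
parameter, and when it later sees its own Via again, a matching hash
means loop, a differing hash means spiral.  The field list and names
below are illustrative, not a faithful implementation of the RFC's
recipe:

```python
import hashlib

# Hedged sketch of RFC 3261 Section 16.6 step 8: hash the fields that
# affected the routing decision into the Via branch parameter.  The
# exact field set here is illustrative only.
LOOP_FIELDS = ("request-uri", "to-tag", "from-tag", "call-id", "cseq")

def branch_hash(request):
    h = hashlib.sha256()
    for f in LOOP_FIELDS:
        h.update(request.get(f, "").encode())
        h.update(b"\0")  # field separator
    return h.hexdigest()[:16]

def classify(request, my_old_branch):
    """Seeing our own Via again: same hash -> loop, different -> spiral."""
    return "loop" if branch_hash(request) == my_old_branch else "spiral"

req = {"request-uri": "sip:a@example.com", "call-id": "x1", "cseq": "1"}
old = branch_hash(req)
assert classify(req, old) == "loop"         # unchanged request: loop
req2 = dict(req, **{"request-uri": "sip:b@example.com"})
assert classify(req2, old) == "spiral"      # retargeted request: spiral
```

The trouble, as argued above, is choosing LOOP_FIELDS: omit a header
that some deployment routes on and you flag legitimate spirals as loops.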

> > Particularly problematic is when a request passes through a proxy B 
> > which spirals the request back to an entry router A.  That is,
> > "source -> A -> B -> A -> B -> destination".  B may easily make a
> > change that it knows will cause itself to route the request
> > differently when spiraled, but A will reject it as a loop.
>
> I am curious, is there a real world example of this or are we
> engaging in over-design?

sipX will tend to act like this, as any request that it forwards through
a rewrite rule will be routed based on its URI, which may easily spiral.
But the question isn't whether an example exists now, but rather whether
someone setting up complex forwarding in the future will run into it.

> > <name>@<domain> -> <name>A@<domain>
> > <name>@<domain> -> <name>B@<domain>
> > 
> > That will generate an exponential branching without using any
> > Request-URI twice:
>
> Agreed.  But then the production rule is in error, right?  That
> is, fooA <> fooAA <> fooAB <> fooB <> fooBA <> fooBB.  So this is
> a legitimate spiral.  The substitution rules for the above forwarding
> ought to be made more stringent...no?

What restrictions do you think the proxy should place on the
substitution rules so as to avoid this situation?  (Since anything that
the proxy does not forbid will be accidentally used some day.)
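
A toy simulation of the two forking rules quoted above shows why no
URI-equality rule helps here: every user <name> maps to both <name>A
and <name>B, so each pass through the proxy doubles the branches while
never producing the same Request-URI twice.

```python
# Illustrative simulation of the forking rules quoted above: each user
# forks into <name>A and <name>B on every pass through the proxy.
def fork(user):
    return [user + "A", user + "B"]

frontier = ["foo"]
seen = set(frontier)
for depth in range(1, 6):
    frontier = [u for user in frontier for u in fork(user)]
    assert len(frontier) == 2 ** depth       # branches double each hop
    assert not seen & set(frontier)          # yet no URI ever repeats
    seen |= set(frontier)
```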

The position I am arguing for is that the only workable mechanism for
detecting and stopping loops is Max-Forwards -- any other rule will be
too restrictive and real-world applications will soon be hobbled by it.
(Because what the users will demand is Turing complete.)

Of course, this doesn't stop the exponential branching problem.  That
requires a different mechanism.
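
For completeness, the Max-Forwards mechanism itself is trivial, which is
part of its appeal.  A simplified sketch (per RFC 3261, a proxy refuses
to forward a request whose Max-Forwards has reached zero, answering 483
Too Many Hops, and otherwise decrements the counter):

```python
# Minimal sketch of the Max-Forwards check a proxy applies to each
# forwarded request: decrement the counter, and refuse to forward
# (483 Too Many Hops) once it reaches zero.
def forward(max_forwards):
    if max_forwards <= 0:
        return None, 483           # reject: too many hops
    return max_forwards - 1, None  # forward with decremented counter

mf, hops = 70, 0
while True:
    mf, status = forward(mf)
    if status == 483:
        break
    hops += 1
assert hops == 70  # any loop is cut off after at most 70 hops
```

Note that this bounds the *depth* of any loop or spiral, but, as said
above, not the *width* of a forking explosion.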

Dale


_______________________________________________
Sip-implementors mailing list
[email protected]
http://lists.cs.columbia.edu/mailman/listinfo/sip-implementors
