On 01/05/2012 03:52 PM, Tomas Hlavaty wrote:
I'm implementing
<http://wiki.services.openoffice.org/wiki/Uno/Remote/Specifications/Uno_Remote_Protocol#Object_Life_Cycle>
and can't make much sense of it.  It seems to me that the spec is
contradictory:

   ...  unless it considers as bridged in any tuple <o, t'>, where t' is
   a subtype of t (including t itself).  If the same tuple appears
   multiple times in the data of a message, the corresponding reference
   count is incremented multiple times.

   ...

   The optimization rule (to not increment the reference count for <o, t>
   when <o, t> itself or some subtype tuple <o, t'> is considered as
   bridged in) is broken...

The last quoted paragraph:

   to not increment the reference count for <o, t> when <o, t> itself or
   some subtype tuple <o, t'> is considered as bridged in

doesn't sound like reference counting.  If the client fetches XInterface
first, then the reference count can never exceed 1.  It seems to depend
heavily on which types the client fetches, and in what order.  Also,
this rule contradicts the sentence:

   If the same tuple appears multiple times in the data of a message, the
   corresponding reference count is incremented multiple times.

No, the sending side increments its ref count multiple times for a given tuple it sends, unless it earlier received that tuple from the other side (i.e., it is "bridged in" at this side), in which case it does not increment the ref count at all.
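
To make that concrete, here is a minimal sketch of the sender-side bookkeeping as I read the spec (all names are made up; this is not the actual binary bridge code):

  import java.util.HashMap;
  import java.util.HashSet;
  import java.util.Map;
  import java.util.Set;

  // Sketch only: hypothetical bookkeeping for one side of a URP bridge.
  final class RefCountSketch {
      record Tuple(String oid, String type) {}

      private final Set<Tuple> bridgedIn = new HashSet<>();           // tuples received from the remote side
      private final Map<Tuple, Integer> bridgedOut = new HashMap<>(); // ref counts for tuples sent out

      // Placeholder for a real type-hierarchy check (a real bridge has full type info).
      private boolean isSubtypeOf(String sub, String sup) {
          return sub.equals(sup);
      }

      // Called once for every appearance of <oid, type> in the data of an outgoing message.
      void noteOutgoing(String oid, String type) {
          for (Tuple t : bridgedIn) {
              // <oid, type> itself, or a subtype tuple of it, is bridged in: the remote
              // side already holds the reference, so do not increment anything.
              if (t.oid().equals(oid) && isSubtypeOf(t.type(), type)) {
                  return;
              }
          }
          // Otherwise, each appearance in the message increments the ref count once.
          bridgedOut.merge(new Tuple(oid, type), 1, Integer::sum);
      }
  }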

Is this spec still valid?

Yes.

I implemented the algorithm according to my understanding of the above
spec, reducing memory leaks by half, but I still get many leaks.  If I'm
strict and ignore the broken optimization rule, I get LO crashing after
some time, likely because of double release.  I'm still missing
something to get refcounting exactly the way LO does it.

This is tricky shit, indeed.

Why is the reference counting algorithm dependent on the cast type in
the first place?  Shouldn't the reference count be tied only to the
oid, and not to <oid, type>?

This is due to the design decision that an object can be revealed across a bridge piecemeal: If the remote side first only requests XInterface, it creates a proxy only for XInterface. If it later also requests some derived XFoo, it creates an additional proxy for XFoo. (If it then requests some intermediate XBar from within the hierarchy chain XInterface - XBar - XFoo, the existing XFoo proxy will already take care of that.) There are individual ref counts for the different proxies, in order to be able to release individual proxies as early as possible.

But this design is not necessarily the best one, indeed. (And having multiple proxies representing the same UNO object only works because UNO object identity is checked with special functionality in the various language bindings, e.g., not with plain == in Java UNO. Another questionable design decision.)
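
For example, in Java UNO (assuming obj is some XInterface proxy obtained across the bridge), querying a further interface can hand back a different Java object, and identity has to be checked with UnoRuntime.areSame:

  import com.sun.star.lang.XComponent;
  import com.sun.star.uno.UnoRuntime;
  import com.sun.star.uno.XInterface;

  // Illustration: two proxies for the same remote UNO object, obtained for
  // different types, are in general distinct Java objects.
  final class IdentityExample {
      static void show(XInterface obj) {
          XComponent comp = (XComponent) UnoRuntime.queryInterface(XComponent.class, obj);
          if (comp != null) {
              System.out.println(comp == obj);                   // may be false (different proxies)
              System.out.println(UnoRuntime.areSame(comp, obj)); // true (same UNO object identity)
          }
      }
  }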

UNO is a distributed protocol.  The links should be considered
unreliable.  Is there a mechanism that when a link between the server
and client bridge breaks, the server releases the resources properly, or
do we get/expect memory leaks?

In some sense this is a QoI (quality of implementation) issue. But existing URP endpoints (binary and Java) do notice if a connection is broken and locally release the objects they have bridged out across that connection.
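
Roughly, and again only as a made-up sketch rather than the actual endpoint code, that cleanup amounts to dropping the local references held for everything bridged out across the broken connection:

  import java.util.HashMap;
  import java.util.Map;

  // Sketch only: what an endpoint does when it notices that the connection
  // underlying a bridge is broken.
  final class DisconnectCleanupSketch {
      record Tuple(String oid, String type) {}

      private final Map<Tuple, Integer> bridgedOut = new HashMap<>();   // refs held for the remote side
      private final Map<Tuple, Object> localRegistry = new HashMap<>(); // hypothetical registry of local objects

      void connectionBroken() {
          // No release messages can arrive any more, so drop the local references
          // that were only kept alive on behalf of the remote side.
          for (Tuple t : bridgedOut.keySet()) {
              localRegistry.remove(t);
          }
          bridgedOut.clear();
      }
  }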

Also, it is not exactly clear at which point in time the release message
should be sent.  One such point in time could be when the client is
finished with the session.  At that point, the client needs to send at
least as many release messages as the number of all the incremented
refcounts it noticed according to this algorithm.  That is potentially
many messages, slowing down the session considerably.  Is there a way to
simply end the session and declare all references unused in one
go (a single message) without causing leaks in the server?

Release messages should generally be sent as early as possible, so that the remote side can clean up garbage as early as possible. (That said, just closing a connection without releasing references first should actually also happen to work, even if it is not good style to defer release messages unduly.)
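
As a sketch (made-up names; it assumes the per-tuple counts noted per the spec), "as early as possible" means sending the pending releases for a tuple as soon as its last local user goes away, instead of waiting for the end of the session:

  import java.util.HashMap;
  import java.util.Map;

  // Sketch only: release a tuple's references as soon as its proxy is no longer
  // used locally, rather than batching everything up at the end of the session.
  final class EarlyReleaseSketch {
      record Tuple(String oid, String type) {}

      private final Map<Tuple, Integer> notedRefs = new HashMap<>(); // increments noted per the spec

      void proxyNoLongerUsed(Tuple t) {
          Integer n = notedRefs.remove(t);
          if (n == null) {
              return;
          }
          for (int i = 0; i < n; i++) {
              sendRelease(t); // one release per noted increment, as discussed above
          }
      }

      private void sendRelease(Tuple t) {
          // ... marshal and send the URP release request for <oid, type> ...
      }
  }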

Thank you,

Good luck!

Stephan