Yes, there is considerable overlap here; thanks for posting, Dahlia. Unfortunately I wasn't aware of, or had forgotten about, this setting.

MessageOnlineUsersOnly should indeed be more efficient, as it performs its analysis much higher up the stack and so needs only one (cached) presence call. I wrote this code because of the input from Michelle and what I thought were reports of large-group messaging problems on OSGrid.

However, it turns out that the default for ForwardOfflineGroupMessages is true and people may simply not have set it to false. This has been the case on the OSGrid plazas, for instance. Michelle, have you tried making sure this setting is false? I believe the default should be false rather than true.

In principle, ForwardOfflineGroupMessages shouldn't actually have a direct impact, since the offline storage is done on a separate thread for each user. But possibly, in a very large group or when many groups are being messaged at once, many threads are tied up for each message, and perhaps this is affecting the runtime (this is speculation).
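
For reference, the combination being suggested looks something like this in OpenSim.ini (a minimal sketch; check OpenSim.ini.example for your version for the exact section names and defaults):

    [Groups]
        ; experimental option as of git master 1937e5f: only send group IMs to online members
        MessageOnlineUsersOnly = true

    [Messaging]
        ; don't queue group IMs for offline members
        ForwardOfflineGroupMessages = false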

On 20/10/12 09:43, Dahlia Trimble wrote:
Justin, would that conflict with this?
http://opensimulator.org/viewgit/?a=commit&p=opensim&h=1e899704c1c19a8c42ff313677a13f35b46605da

On Fri, Oct 19, 2012 at 7:32 PM, Justin Clark-Casey <[email protected]> wrote:

    Regarding the groups work, I have now implemented an OpenSimulator experimental option, MessageOnlineUsersOnly,
    in [Groups] as of git master 1937e5f.  When set to true this will only send group IMs to online users.
    This does not require a groups service update.  I believe OSGrid is going to test this more extensively soon,
    though it appears to work fine on Wright Plaza.

    It's temporarily a little spammy on the console right now (what isn't!), with a debug message that says
    how many online users it is sending to and how long a send takes.

    Unlike Michelle's solution, this works by querying the Presence service for online users, though it also
    caches this data to avoid hitting the presence service too hard.

    Even though I implemented this, I'm not convinced that it's the best way to go - I think Michelle's approach
    of sending login/logoff status directly from simulator to groups service could still be better.
    My chief concern with the groups approach is the potential inconsistency between online status stored there
    and in the Presence service.  However, this could be a non-issue.  Need to give it more thought.


    On 14/10/12 22:53, Akira Sonoda wrote:

        IMHO, finding out which group members are online and sending group IMs/notices etc. to them should not
        be done by the region server from which the group IM/notice is sent.  This is a task which should be
        done centrally - in the case of OSgrid, in Dallas, TX ( http://wiki.osgrid.org/index.php/Infrastructure ).
        The region server should only collect the group IM/notice and send it to the central group server, or,
        in the other direction, receive the IM/notice from the central group server and distribute it to the
        agents active on its region(s).


    That concentrates all distribution on a central point rather than spreading it amongst simulators.
    Then OSGrid has the problem of scaling this up.

    Having said that, there are advantages to funnelling things through a reliable central point.  Which is
    better is a complicated engineering issue - the kind of which there are many in the MMO/VW space.



        But there are other places which can and should be improved.  I did some tests with various viewers,
        counting the web requests to the central infrastructure:

        Test 1: Teleport from a Plaza to one of my regions located on a server in Europe, and afterwards
        logging out:

        Cool VL Viewer: 912 requests, mostly SynchronousRestForms POST http://presence.osgrid.org/presence
        (I guess to inform all my 809 friends [mostly only 5% online] that I am going offline, because the
        calls to the presence service were done after I closed the viewer)
        Singularity Viewer: 921 requests, mostly calls to presence after logoff
        Teapot Viewer: 910 requests, mostly calls to presence after logoff
        Astra Viewer: 917 requests, mostly calls to presence after logoff
        Firestorm: 1005 requests, mostly calls to presence after logoff
        Imprudence: 918 requests, mostly calls to presence after logoff

        So far so good.  I have no idea why my 760 offline friends have to be informed that I went offline ...
        (Details can be found here: https://docs.google.com/open?id=0B301xueh1kxdNG1wLWo2YVVfYjA )

        Test 2: Direct login onto my region and then logoff (with FetchInventory2 disabled):

        Cool VL Viewer: 2232 requests, mostly calls to presence (~800 during login and ~800 during logout) and xinventory
        Singularity Viewer: 2340 requests, mostly calls to presence and xinventory
        Teapot Viewer: produced 500+ threads in a very short time and then OpenSim.exe crashed
        Astra Viewer: 2831 requests, mostly calls to presence and xinventory
        Firestorm Viewer: ACK timeout for me; OpenSim.exe survived on 500 threads for 30+ minutes, producing
        4996 requests, mostly xinventory
        Imprudence: 1745 requests, mostly presence

        Again, why do all my 809 friends have to be verified with single requests?  Then why this difference in
        xinventory requests?  And why are both Teapot and Firestorm producing so many threads in such a short
        time, bringing OpenSim.exe to a crash or close to it ...
        (Details can be found here: https://docs.google.com/open?id=0B301xueh1kxdMDJxWm5UR2QtU2c )


    The presence information is useful data, and it was possible in git master commit da2b23f to change the
    Friends module to fetch all presence data in one call for status notification when a user goes on/offline,
    rather than make a separate call for each friend.

    This should be more efficient, since only the latency and resources of one call are required.  However,
    since each friend still has to be messaged separately to tell them of the status change, I'm not sure how
    much practical effect this will have.
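
    In rough outline (this is a sketch of the idea, not the actual da2b23f change; the interface and helper
    names below are simplified stand-ins rather than the real OpenSimulator ones), the change amounts to
    replacing a per-friend presence lookup with one batched lookup:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical, simplified presence lookup for illustration only.
        public class PresenceRecord { public string UserID; }

        public interface IPresenceLookup
        {
            // Returns records only for the queried users that are currently online.
            PresenceRecord[] GetAgents(string[] userIDs);
        }

        public static class StatusNotifier
        {
            // Tell online friends that 'userID' changed status, using a single
            // presence query instead of one query per friend.
            public static void NotifyFriends(IPresenceLookup presence, string userID,
                                             IEnumerable<string> friendIDs, bool online)
            {
                PresenceRecord[] onlineFriends = presence.GetAgents(friendIDs.ToArray()); // one network call

                // Each online friend still gets an individual IM, so the saving is
                // in presence lookups, not in the per-friend notification itself.
                foreach (PresenceRecord friend in onlineFriends)
                    SendStatusIM(friend.UserID, userID, online);
            }

            static void SendStatusIM(string to, string from, bool online)
            {
                // Placeholder for the actual instant message send.
            }
        }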



        Test 3: Direct login to my region with FetchInventory2 enabled:

        Teapot Viewer: I closed the viewer after 30 minutes.  The number of threads was still rising, up to 260.
        In the end I counted 30634 xinventory requests... my inventory has 14190 items !!!
        Firestorm Viewer: quite normal, approx 2020 requests ... quite a few slow FetchInventoryDescendents2
        caps, with 100 sec max


    Regarding the inventory service, unfortunately many viewers appear to behave very aggressively when fetching
    inventory information.  For instance, I'm told that if you have certain types of AO enabled, some viewers
    will fetch your entire inventory.  The LL infrastructure may be able to cope with this, but the more modest
    machines running grids can have trouble, it seems.

    I'm not sure what the long-term solution is.  I suspect it's possible to greatly increase inventory fetch
    efficiency, possibly by some kind of call batching.  Or perhaps there's some viewer-side caching that
    OpenSimulator isn't working with properly.
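
    Purely to illustrate the call-batching idea (nothing like this exists today; the interface below is made up
    for the sketch), the win would come from turning many folder-by-folder round trips into one request for a
    whole set of folders:

        using System;
        using System.Collections.Generic;

        // Hypothetical batched inventory fetch - illustrative only.
        public class InventoryFolderContents { public Guid FolderID; public List<string> ItemNames; }

        public interface IBatchedInventoryFetch
        {
            // One round trip returning the contents of many folders, instead of
            // one FetchInventoryDescendents2-style request per folder.
            List<InventoryFolderContents> FetchFolders(Guid userID, List<Guid> folderIDs);
        }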


        (Details can be found here: https://docs.google.com/open?id=0B301xueh1kxdNEtEeUVFamU1QUE )

        Just my observations this weekend.
        Akira



        2012/10/13 Justin Clark-Casey <[email protected]>


             Hi Michelle.  I've now had some more time to think about this.  In fact, I established a proposal
             summary page at [1] which I'll change as we go along (or please feel free to change it yourself).
             We do need to fix this problem of group IM taking massive time with groups that aren't that big.

             I do like the approach of caching online status (and login time) in the groups service.

             1.  It's reasonably simple.
             2.  One network call to fetch online group members per IM.
             3.  May allow messaging across multiple OpenSimulator installations.

             However, this approach does mean:

             1.  Independently updating the groups service on each login/logout.  I'm not saying this is a
             problem, particularly if it saves traffic later on.
             2.  The groups service has to deal with extra information.  Again, this is fairly simple, so not
             necessarily a fatal issue, though it does mean every groups implementation needs to do this in
             some manner.
             3.  The online cache is not reusable by other services in the future.

             On a technical note, the XmlRpc groups module does in theory cache data for 30 seconds by default,
             so a change in online status may not be seen for up to 30 seconds.  I personally think that this
             is a reasonable tradeoff.

             Of the above cons, 3 is the one I find most serious.  If other services would also benefit from
             online status caching in the future, they would have to implement their own caches (and be updated
             from simulators).

             I do agree that making a GridUser.LoggedIn() call for every single group member on every single IM
             is unworkable.  Even if this is only done once and cached for a certain period of time, it could
             be a major issue for large groups.

             So an alternative approach could be to add a new call to the GridUser service (maybe
             LoggedIn(List<UUID>)) that will only return GridUserInfo for those that are logged in.  This could
             then be cached simulator-side for a certain period of time (e.g. 30 seconds, like the groups
             information) and used for group IM.  A rough sketch of what that might look like is below.
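
             A minimal sketch of that idea (LoggedIn(List<UUID>), the cache shape and the 30-second expiry are
             all illustrative, not an existing API; the types are stand-ins to keep the sketch self-contained):

                 using System;
                 using System.Collections.Generic;

                 // Stand-ins for the real OpenSimulator types.
                 public struct UUID { public Guid Value; }
                 public class GridUserInfo { public UUID UserID; }

                 // Hypothetical batched GridUser query: returns info only for users currently logged in.
                 public interface IGridUserQuery
                 {
                     List<GridUserInfo> LoggedIn(List<UUID> userIDs);
                 }

                 // Simulator-side cache so group IM doesn't hit the GridUser service on every message.
                 public class OnlineMembersCache
                 {
                     readonly IGridUserQuery m_service;
                     readonly TimeSpan m_maxAge = TimeSpan.FromSeconds(30); // like the groups cache
                     List<GridUserInfo> m_cached = new List<GridUserInfo>();
                     DateTime m_fetched = DateTime.MinValue;

                     public OnlineMembersCache(IGridUserQuery service) { m_service = service; }

                     public List<GridUserInfo> GetOnlineMembers(List<UUID> memberIDs)
                     {
                         // Members who come online inside the 30s window are missed until expiry -
                         // the same tradeoff already accepted for GetGroupMembers() caching.
                         if (DateTime.UtcNow - m_fetched > m_maxAge)
                         {
                             m_cached = m_service.LoggedIn(memberIDs); // one service call per cache period
                             m_fetched = DateTime.UtcNow;
                         }

                         return m_cached;
                     }
                 }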

             This has the advantages that:

             1.  Groups and future services don't need to do their own login caching.
             2.  Future services can use the same information and code rather than have to cache login
             information themselves.

             However, it does:

             1.  Require GridUserInfo caching simulator-side; I would judge this to be a more complex approach.
             2.  Mean that during the cache period, newly online group members will not receive messages (this
             is going to happen with GetGroupMembers() caching anyway).
             3.  Mean that traffic is still generated to the GridUser service at the end of every simulator-side
             caching period.  This is probably not a huge burden.

             So right now, I'm somewhat more in favour of a GridUserInfo simulator-side caching approach than
             caching login information within the groups service.  However, unlike you, I haven't actually
             tried to implement this approach, so there may well be issues that I haven't seen.

             What do you think, Michelle (or anybody else)?


             On 10/10/12 19:47, Michelle Argus wrote:

                 http://code.google.com/p/flotsam/ is the current flotsam version and points to the github repo
                 which I forked and then patched.

                 None of the changes I proposed in my git fork have been implemented, neither in opensim nor in
                 flotsam.

                 Consider my proposal as a quick fix for the time being, one which does not solve all the other
                 issues mentioned in later mailings.

                 On 09.10.2012 10:24, Ai Austin wrote:

                     Michelle Argus on Wed Oct 3 18:00:23 CEST 2012:

                         I have added some changes to the group module of OpenSim and the flotsam server.
                         ...
                         The changes can be found in the 2 gits here: https://github.com/MAReantals


                         NB: Both changes to flotsam and opensim are backward compatible and do not require
                         that both parts are updated.  If some simulators are not updated, it can happen that
                         some group members do not receive group messages, as their online status is not
                         updated correctly.  In a grid like OSgrid my recommendation would thus be to first
                         update the simulators and at a later stage flotsam.


                     Hi Michelle... I am looking at what is needed to update the Openvue grid, which is using
                     the flotsam XmlRpcGroups module.  The GitHub repository has the changes from a few days
                     ago... but I wonder if there has been an update/commit into the main OpenSim GitHub area
                     already.  I cannot see a related commit looking back over the last week or so.  Is the
                     core system updated so this module is up to date in that?  I also note that the
                     OpenSim.ini.example file contains a reference to http://code.google.com/p/flotsam/ for
                     details of how to install the service... but that seems to be pointing at an out-of-date
                     version?

                     I think for the flotsam php end it is straightforward, and I obtained the changed
                     groups.sql and xmlrpc.php files needed.  But note that people are still pointed via the
                     OpenSim.ini.example comments at the old version on http://code.google.com/p/flotsam/, so
                     either that needs updating to the latest version, or the comment in OpenSim.ini.example
                     needs to be changed.

                     To avoid mistakes, I wonder if you can clarify where to go for the parts needed, at what
                     revision/date of OpenSim 0.7.5 dev master this was introduced, and what to get and what to
                     change for an existing service in terms of the database tables, the OpenSim.exe instance
                     and the web support php code areas?

                     Thanks Michelle, Ai







             --
             Justin Clark-Casey (justincc)
             OSVW Consulting
        http://justincc.org
        http://twitter.com/justincc








    --
    Justin Clark-Casey (justincc)
    OSVW Consulting
    http://justincc.org
    http://twitter.com/justincc







--
Justin Clark-Casey (justincc)
OSVW Consulting
http://justincc.org
http://twitter.com/justincc
_______________________________________________
Opensim-dev mailing list
[email protected]
https://lists.berlios.de/mailman/listinfo/opensim-dev
