I've been trying to track down a bug that several people have hit: clients running multiple bots and doing a lot of things at the same time suddenly start consuming runaway amounts of memory under Mono on Linux until all of the system memory is used up. I don't have an answer for that yet, but I started by profiling libsecondlife and found that the login sequence creates a ridiculous number of Hashtable objects that hang around in memory.

I've rewritten the login process to use the built-in XML parsing in .NET instead of the XmlRpcCS library, which should make memory usage a lot more efficient. Even so, a lot of data is still passed back (at least 56KB with a full request), and not all of it needs to be returned on every request. That also explains why the server sometimes responds to official clients faster than to libsecondlife ones. The problem is that some of this data is necessary for certain parts of the code to work (such as inventory), but not all clients want that functionality.

My idea is to add booleans to the Settings class that automatically request the needed data at login and also enable or disable the corresponding part of the code. For example, setting Client.Settings.USE_INVENTORY = true; would request the root inventory UUIDs and inventory skeletons and allow the InventoryManager to function.
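
To make the idea a little more concrete, here's a rough C# sketch of how a Settings flag could gate both the login request and the inventory code. USE_INVENTORY matches the example above, but BuildLoginOptions and the "inventory-root"/"inventory-skeleton" option strings are just my guesses at how the request list might be assembled, not a final API:

using System.Collections.Generic;

// Hypothetical sketch only; names other than USE_INVENTORY are placeholders.
public class Settings
{
    // When true, request the root inventory UUIDs and inventory skeletons
    // at login and let the InventoryManager operate.
    public bool USE_INVENTORY = false;
}

public class LoginOptionsBuilder
{
    // Builds the list of option strings sent with the login request so
    // only the data a client actually wants gets asked for.
    public static List<string> BuildLoginOptions(Settings settings)
    {
        List<string> options = new List<string>();

        if (settings.USE_INVENTORY)
        {
            options.Add("inventory-root");     // root folder UUIDs
            options.Add("inventory-skeleton"); // folder skeleton
        }

        // Other feature flags (friends, groups, etc.) could add their own
        // options here in the same way.
        return options;
    }
}

Each manager (InventoryManager and so on) would then check its flag before doing any work or registering its packet handlers.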

What are your thoughts/ideas on how this could work?

The login rewrite is being tested now and should be in SVN by the end of the day. It shouldn't break anyone's code, but Client.Network.LoginError has been marked obsolete and replaced by Network.LoginErrorKey (which just contains the error reason, such as "god", "key", "presence", or "libsl" if the error happened within the library) and Network.LoginMessage, which holds either the descriptive error message or the message of the day, depending on whether login fails or succeeds.
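
For anyone who needs to update their code, here's a rough usage sketch of the new properties. The Login() parameters shown are just an illustration of whatever overload you already call; only LoginErrorKey and LoginMessage behave as described above:

using System;
using libsecondlife;

class LoginExample
{
    static void Main()
    {
        SecondLife Client = new SecondLife();

        // Parameters here are illustrative; use the overload you already call
        if (Client.Network.Login("First", "Last", "password", "MyBot", "you@example.com"))
        {
            // On success, LoginMessage carries the message of the day
            Console.WriteLine("MOTD: " + Client.Network.LoginMessage);
        }
        else
        {
            // On failure, LoginErrorKey carries the short reason ("god", "key",
            // "presence", or "libsl" for library-side errors) and LoginMessage
            // the descriptive error text
            Console.WriteLine("Login failed [" + Client.Network.LoginErrorKey +
                              "]: " + Client.Network.LoginMessage);
        }
    }
}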

John Hurliman
