Not that I have a lot of standing to voice my thoughts on the matter, but I
would like to see some detailed stats and analysis on things besides crashes.
Freezes, disconnects, stalls, and memory usage come to mind.  In fact, more
than just analysis, I would love to see some cutting and pasting of different
versions of the relevant code from different viewers that had wildly
different behaviors doing the same tasks....

For those who need an example to illustrate my point or understand my
reasoning, I include one below.  For the rest, I will simply say that I think
it would be both enlightening and productive to play Frankenstein with the
various viewers' code...

Example:
Let's say that for some reason a particular build of Firestorm is more stable
than the Snowstorm build it was based on, but that it tends to both freeze up
more and use more memory.  (This actually happens to have been true for some
builds.)  Now let's say a mythical viewer we will call Bluestorm comes along,
built from that same Snowstorm build, which uses less memory than either
Firestorm or Snowstorm, freezes slightly less often than Firestorm but more
often than Snowstorm, and crashes more than both but almost never stalls or
disconnects.
Let's go on to say that, hypothetically, a study showed Bluestorm had fewer
stalls and disconnects because of how its viewer-side code handled some part
of group or inventory calls, but that a small bug in that same code was
responsible for its higher crash rate, a bug Firestorm already had the fix
for.  Maybe the study could also show that some of the insane memory usage in
Snowstorm and Firestorm tended to happen when people maximized a fifth group
or individual IM window after receiving 100+ new lines in minimized
conversations, and that something in Bluestorm was changed so that didn't
happen.

I think some real valuable data could come from taking a month or two to look
at what causes the differences in all of these areas of performance, and
possibly swapping the various versions of the relevant code chunks between
viewers to see what works best.  I bet a lot of the wildly different behavior
comes from little corrections people don't even think about, as well as from
changes made while preparing code for the integration of other features.
Those features might not even need to be integrated for some of those changes
to be helpful.
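
To make that concrete, here is a rough sketch (Python) of the kind of
per-build roll-up I mean.  Everything in it is invented for illustration: the
field names, the builds, the numbers, and the assumption that each viewer
exports one record per session at logout.  It just tallies freezes, stalls,
disconnects, and memory per build the same way crash rates already get
tallied:

from collections import defaultdict

# One invented record per viewer session; in practice these would be
# harvested automatically at logout, the way crash data already is.
sessions = [
    {"build": "Firestorm", "outcome": "clean", "freezes": 2,
     "stalls": 0, "peak_mem_mb": 1400},
    {"build": "Snowstorm", "outcome": "crash", "freezes": 0,
     "stalls": 1, "peak_mem_mb": 900},
    {"build": "Bluestorm", "outcome": "disconnect", "freezes": 1,
     "stalls": 0, "peak_mem_mb": 700},
    # ... thousands more ...
]

# Accumulate per-build counters across all sessions.
totals = defaultdict(lambda: {"n": 0, "crashes": 0, "disconnects": 0,
                              "freezes": 0, "stalls": 0, "mem": []})
for s in sessions:
    t = totals[s["build"]]
    t["n"] += 1
    t["crashes"] += (s["outcome"] == "crash")
    t["disconnects"] += (s["outcome"] == "disconnect")
    t["freezes"] += s["freezes"]
    t["stalls"] += s["stalls"]
    t["mem"].append(s["peak_mem_mb"])

# Print one comparison line per build.
for build, t in sorted(totals.items()):
    print("%s: %d sessions, %.0f%% crash, %.0f%% disconnect, "
          "%.1f freezes/session, %.1f stalls/session, %.0f MB mean peak"
          % (build, t["n"],
             100.0 * t["crashes"] / t["n"],
             100.0 * t["disconnects"] / t["n"],
             t["freezes"] / t["n"],
             t["stalls"] / t["n"],
             sum(t["mem"]) / float(len(t["mem"]))))

Nothing fancy, but run over enough sessions per build, a table like that
would make the freeze and memory differences as visible as the crash ones
are now.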

Date: Tue, 9 Aug 2011 17:10:54 -0400
From: lee.po...@gmail.com
To: s...@lindenlab.com
CC: opensource-dev@lists.secondlife.com
Subject: Re: [opensource-dev] Collecting DIDN'T CRASH data

Good to hear.  But I was also thinking of the graphics settings.  There are all 
those proven and unproven theories about VBO on, but not with AA, and only on 
alternate Tuesdays....I know I have some of them myself.  I have convinced 
myself that VBO is bad on my ATI 2600Pro iMac.



On Tue, Aug 9, 2011 at 4:29 PM, Brian McGroarty <s...@lindenlab.com> wrote:


Data is collected about viewer sessions that end in a successful logout, and 
it's examined in much the manner you propose. I believe the aggregate data is 
shared with Third Party Viewer teams and/or at the Open Source office hours. 
I'm not sure if the lab has shared a recent breakdown against different OSes 
and graphics chipsets, but that could be a good thing to ask for if anyone's 
focusing on graphics or stability.



-- 
Brian McGroarty | Linden Lab
Sent from my Newton MP2100 via acoustic coupler




_______________________________________________
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges
