I'm curious how a script would be expected to recover from a 'flurry of
502/503' errors. If my HTTP-enabled object is expecting to communicate with
my external-to-SL server and instead receives a 503 error, all I can see
doing is retrying. So is hammering the server with retries really going
to improve sim conditions?
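For what it's worth, the least harmful retry I can picture is exponential
backoff with jitter rather than an immediate retry loop. A rough sketch of
the pattern in Python (illustration only; `do_request` is a hypothetical
stand-in for whatever actually issues the HTTP request, and an LSL script
would schedule the wait with llSetTimerEvent instead of sleeping):

```python
import random
import time

def fetch_with_backoff(do_request, max_attempts=5, base_delay=1.0, max_delay=60.0):
    """Retry a request on 502/503, waiting longer (with jitter) each time.

    do_request is any callable returning an HTTP status code; the names
    and defaults here are illustrative, not part of any SL API.
    """
    for attempt in range(max_attempts):
        status = do_request()
        if status not in (502, 503):
            return status  # success, or an error retrying won't help
        # Exponential backoff with full jitter: the wait doubles each
        # attempt, and the random spread keeps a crowd of clients from
        # retrying in lockstep.
        delay = min(max_delay, base_delay * (2 ** attempt))
        time.sleep(random.uniform(0, delay))
    return status  # give up after max_attempts
```

The jitter is the point: if every object retries on the same schedule, the
'flurry' just repeats in lockstep.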

Would this be expected when communicating with external-to-SL services, or
is this more likely to occur when a scripted service is trying to
communicate with another scripted service within the Grid?

I am also not sure why scripts are being included in this at all. As I
understood it, the HTTP issues concerned the number of HTTP connections a
viewer had to handle, with another factor in that mix being the particular
router the resident uses and how well or badly it handles multiple HTTP
connections. Service-to-service communications, such as a script to an
external server or a script to another script, shouldn't be a contributing
factor in these viewer connection problems; those connections should never
be hitting any viewer.
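For what it's worth, if recovery really is left to the script, the most
cooperative behaviour I can picture is pausing all traffic after a 503 and
doubling the pause on each consecutive failure, clearing it on success. A
rough Python sketch of that idea (the names, thresholds, and clock plumbing
are mine, not anything from the SL or LSL APIs):

```python
import time

class Backoff503:
    """Client-side throttle: after a 502/503, pause all requests for a
    cool-down that doubles on each consecutive failure and resets to
    zero on success.  Illustrative only, not an SL API.
    """

    def __init__(self, base=1.0, cap=120.0):
        self.base = base          # first cool-down, in seconds
        self.cap = cap            # never wait longer than this
        self.delay = 0.0          # current cool-down length
        self.resume_at = 0.0      # clock time when requests may resume

    def ready(self, now=None):
        """True if enough time has passed to try another request."""
        now = time.monotonic() if now is None else now
        return now >= self.resume_at

    def record(self, status, now=None):
        """Feed each response status back into the throttle."""
        now = time.monotonic() if now is None else now
        if status in (502, 503):
            self.delay = min(self.cap, self.delay * 2 if self.delay else self.base)
            self.resume_at = now + self.delay
        else:
            self.delay = 0.0
            self.resume_at = now
```

Two 503s in a row would make it wait 1 s and then 2 s before allowing the
next attempt; a single success clears it.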

                                                       - Dari


On Thu, Mar 14, 2013 at 5:13 PM, Monty Brandenberg <mo...@lindenlab.com> wrote:

>
> We now have three channels and a number of regions available for testing:
>
>
>    - DRTSIM-203.  Normal release intended to go to Agni supporting
>    keepalive connections and other changes.  Regions:
>       - TextureTest2.  High texture count, no meshes, low residency limit
>       to prevent interference when doing benchmark testing.
>       - (Coming soon)  MeshTest2.  High mesh count (many distinct mesh
>       assets which all look the same), few textures.  Low residency limit
>       to prevent interference when doing benchmark testing.
>    - DRTSIM-203H.  Our 'Happy' channel with more generous limits and
>    optimistic service controls.
>       - TextureTest2H.  Identical to TextureTest2.
>       - (Coming soon)  MeshTest2H.  Identical to MeshTest2.
>    - DRTSIM-203A.  Our 'Angry' channel with stricter and preemptive
>    enforcement of limits (generates many 503 errors).
>       - TextureTest2A.  Identical to TextureTest2.
>       - (Coming soon)  MeshTest2A.  Identical to MeshTest2.
>
>
> Test regions share object and texture references, so if you are trying to
> measure download rates or really exercise the network, you'll need to
> defeat caching, typically with a restart and a manual cache clear (or your
> own preferred method).  Aditi is also hosting some of the server-side
> baking work, and you may not get a reliable avatar appearance unless you
> use a Sunshine project viewer.
>
> What we're looking for on these channels:
>
> DRTSIM-203.  HTTP issues generally.  HTTP texture download reliability and
> throughput.  Script writers using *llRequestURL* and *llRequestSecureURL*
> should try to get an A/B comparison going between a normal 'Second Life
> Server' region on Aditi and DRTSIM-203, particularly with competing
> traffic like large texture or mesh downloads.  Scripts aren't getting a
> boost with this work, but they shouldn't be adversely impacted.  Mesh also
> isn't getting a boost this time but, again, negative impact should be
> avoided.  Third-party viewer developers should test for overall
> compatibility with all HTTP services.
>
> We're interested in reports of regressions in any areas.  We *are*
> expecting more 503 errors (0x01f70001) in log files as it will be possible
> to push requests faster than before and certain throttles will be hit.  As
> long as these are recoverable, they're not a regression but an indicator
> of better utilization.
>
> DRTSIM-203H (Happy).  Scripts and mesh do get a boost here and other
> limits are generally raised.  This may increase the tendency to get 503 and
> 502 (0x01f60001) errors in some areas.  Again, these aren't regressions as
> long as they're recoverable.  Subjective and objective comments on Mesh and
> scripting behavior are sought here.
>
> DRTSIM-203A (Angry).  This channel deliberately restricts resources and
> uses a punitive enforcement policy that should result in a storm of 503
> errors.  Viewers are expected to recover from these.  Scripters can use
> this to test against a (reliably unreliable?) grid to see if they're
> handling recovery well.  A higher error rate and lower throughput and
> availability are expected here.  What is being tested is viewer and script
> robustness in the face of constraints.  A more rigid enforcement policy, if
> tolerated by external software, might actually allow us to set higher
> limits if we can pull back when required.
>
>
> _______________________________________________
> Policies and (un)subscribe information available here:
> http://wiki.secondlife.com/wiki/OpenSource-Dev
> Please read the policies before posting to keep unmoderated posting
> privileges
>