Jesse,

On 08/18/11 05:58 PM, Jesse Butler wrote:
Hi William,

When testing for a return from getResponse(), would it be better to save off 
the return locally so that it could be used, rather than calling it again? For 
example:

        response = query.getResponse()
        if response is not None:
                return response[0][0]
It seems either wasteful to call it twice,
Looking inside getResponse(), AI_database.py:getResponse() does little more than return an object attribute that is a list, so I composed this test:
class a(object):
    def __init__(self):
        self._ans = [1, 2, 3]

    def getResponse(self):
        return self._ans

    def hasResponse(self):
        return self._ans is not None

def myway(ans):
    # call getResponse() twice: once for the test, once for the return
    if ans.getResponse():
        return ans.getResponse()

def testitfirst(ans):
    # test hasResponse() first, then call getResponse() once
    if ans.hasResponse():
        return ans.getResponse()

def jessesway(ans):
    # save the return locally, test it, and reuse it
    b = ans.getResponse()
    if b:
        return b

ans = a()
for i in range(1000000):
    #myway(ans)
    #testitfirst(ans)
    jessesway(ans)
My way (calling getResponse() twice):
real    0m1.33s
user    0m1.29s
sys    0m0.02s
Adding a hasResponse() test before calling getResponse():
real    0m1.28s
user    0m1.24s
sys    0m0.02s
Just doing the assignment and testing the saved value (Jesse's way):
real    0m1.00s
user    0m0.95s
sys    0m0.02s

So doing the assignment is faster.
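
The same comparison could also be run in-process with the standard timeit module instead of timing the whole script; this is only a sketch reusing the definitions above, not how the numbers here were produced:

import timeit

# time 1,000,000 calls of each variant against the same test object
ans = a()
for func in (myway, testitfirst, jessesway):
    elapsed = timeit.timeit(lambda: func(ans), number=1000000)
    print('%s: %.2fs' % (func.__name__, elapsed))
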
or even possible that you might end up with the same situation (though far less 
likely), since you check it, then call it again.
No, since getResponse() doesn't actually repeat the query; it just looks at a list 
that has already been assigned.

Changing to do the assignment.
Thanks,
William

Best
Jesse

On Aug 18, 2011, at 7:51 AM, schumann william wrote:

https://cr.opensolaris.org/action/browse/caiman/wmsch/7073935/webrev/

The stress errors are due to database lookups on manifests and profiles that were 
deleted while 'installadm list -p/-m' was running.  For example, 
AI_database.py:getNames() first counts the number of names (manifest or profile 
entries), then launches the database queries one at a time (a memory-usage 
limitation feature for very large queries).  This opens a window during which an 
entry can be deleted, so the corresponding query returns no rows and the code 
fails when it references them.

The solution is to add checks that the rows exist in the query output before 
referencing them.
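
Roughly the shape of the added check, using the saved-response style discussed above; the function and variable names here are illustrative, not the actual AI_database.py code:

def first_response_field(query):
    # save the query output once instead of calling getResponse() twice
    response = query.getResponse()
    # if the manifest/profile was deleted in the window, there are no
    # rows to reference, so return None and let the caller handle it
    if not response:
        return None
    return response[0][0]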

Stress-tested with this script and variations of it: looping, using 5 different 
profiles at a time, and launching the same script multiple times to compound the 
contention (a possible driver for that is sketched after the script):

import sys
import subprocess

mysys = sys.argv[1]  # profile name/file passed on the command line
mysvc = '167nightly'  # install service name
mycp = ['installadm', 'create-profile', '-p', mysys, '-n', mysvc, '-f', mysys]
myls = ['installadm', 'list', '-p', '-n', mysvc]
mydp = ['installadm', 'delete-profile', '-p', mysys, '-n', mysvc]
subprocess.call(mycp)  # create the profile
subprocess.call(myls)  # list profiles for the service
subprocess.call(mydp)  # delete the profile
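
Launching the script multiple times in parallel can be done with a small driver like this (hypothetical, not part of the webrev), assuming the script above is saved as stress.py and the five profile files exist:

import subprocess

# start one copy of the stress script per profile, then wait for all of
# them, so the create/list/delete sequences overlap and contend
profiles = ['prof1', 'prof2', 'prof3', 'prof4', 'prof5']
procs = [subprocess.Popen(['python', 'stress.py', prof]) for prof in profiles]
for proc in procs:
    proc.wait()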

Calling code was checked for compatibility.

Related pyunit tests pass.
_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss