[
https://issues.apache.org/jira/browse/AXIS2C-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Robert Lazarski resolved AXIS2C-1001.
-------------------------------------
Fix Version/s: 2.0.0
(was: 1.7.0)
Resolution: Fixed
Root Cause: The async non-blocking worker thread accessed op_client through
args_list->op_client after the main thread might already have freed it,
causing use-after-free crashes in two scenarios:
1. Rapid async calls: When making a second async call while the first is
still in progress, svc_client frees the old op_client (at svc_client.c:932)
before the worker thread finishes
2. Service client cleanup: When freeing the service client while an async
operation is running
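To make the race concrete, here is a distilled, self-contained model of the
pre-fix pattern (plain C with pthreads; the types and names are illustrative
stand-ins, not the actual Axis2/C definitions):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Illustrative stand-ins for axis2_op_client_t and the worker args. */
    typedef struct op_client { void *svc_ctx; } op_client_t;
    typedef struct worker_args { op_client_t *op_client; } worker_args_t;

    static void *worker_func(void *data)
    {
        worker_args_t *args_list = data;
        usleep(10000); /* the HTTP exchange happens here in the real code */
        /* Use-after-free: op_client may already be freed by now. */
        printf("svc_ctx = %p\n", args_list->op_client->svc_ctx);
        free(args_list);
        return NULL;
    }

    int main(void)
    {
        op_client_t *op_client = malloc(sizeof *op_client);
        op_client->svc_ctx = NULL;

        worker_args_t *args = malloc(sizeof *args);
        args->op_client = op_client;

        pthread_t thd;
        pthread_create(&thd, NULL, worker_func, args);

        /* Main thread: a second async call (or svc_client cleanup) frees
           the old op_client while the worker thread is still running. */
        free(op_client);

        pthread_join(thd, NULL);
        return 0;
    }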
The Fix (commit c6b49b66a):
1. Added svc_ctx to the worker thread args structure to store a copy of the
service context pointer
2. Changed the worker function to use args_list->svc_ctx instead of
dereferencing args_list->op_client->svc_ctx
3. Removed the unsafe call to
axis2_op_client_add_msg_ctx(args_list->op_client, ...) which was:
- Unsafe (accessing potentially freed memory)
- Redundant (response is delivered via callback)
Files modified: src/core/clientapi/op_client.c
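The same model with the fix pattern applied (again illustrative, not the
literal commit): everything the worker thread needs is copied into the args
structure before the thread starts, so the worker never dereferences
op_client.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct op_client { void *svc_ctx; } op_client_t;
    /* The args struct now carries its own copy of the svc_ctx pointer. */
    typedef struct worker_args { void *svc_ctx; } worker_args_t;

    static void *worker_func(void *data)
    {
        worker_args_t *args_list = data;
        /* Safe: the copied pointer stays valid even if op_client is freed,
           because the service context outlives the operation client. */
        printf("svc_ctx = %p\n", args_list->svc_ctx);
        free(args_list);
        return NULL;
    }

    int main(void)
    {
        void *svc_ctx = malloc(16); /* stands in for the service context */
        op_client_t *op_client = malloc(sizeof *op_client);
        op_client->svc_ctx = svc_ctx;

        worker_args_t *args = malloc(sizeof *args);
        args->svc_ctx = op_client->svc_ctx; /* copy before spawning */

        pthread_t thd;
        pthread_create(&thd, NULL, worker_func, args);

        free(op_client); /* main thread may free op_client at any time */
        pthread_join(thd, NULL);
        free(svc_ctx);
        return 0;
    }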
This fix prevents the use-after-free that occurred in the original bug report
where firing multiple non-blocking web service calls rapidly would "always
cause failure on the second call."
> Axis2/C Stress tests fail
> -------------------------
>
> Key: AXIS2C-1001
> URL: https://issues.apache.org/jira/browse/AXIS2C-1001
> Project: Axis2-C
> Issue Type: Bug
> Components: tests
> Environment: Using an Axis2/C snapshot from 7/2/2008 compiled with
> Guththila enabled (due to problems with libxml)
> Reporter: Frank Huebbers
> Priority: Major
> Fix For: 2.0.0
>
>
> I am using Axis2/C in an application which uses asynchronous web service
> calls heavily. In this application, it can happen that several web service
> calls are issued one after another. In my tests, however, I have noticed that
> this almost always causes problems with Axis.
> I was able to narrow down the problem to the following test cases:
> 1. Fire several non-blocking web service calls right after each other
> (without waiting for a response).
> ==> Always causes failure on the second call
> 2. Fire 100 web service calls one after the other (i.e., fire one, wait for
> response, fire next)
> ==> Causes failure randomly
> Now, it's important to note that I am reusing the Axis environment and stub
> for these calls.
> In my next series of tests, I repeated the two set-ups above. Instead of
> reusing the stub for each call, however, I created a new stub for each call.
> In my test, this allowed me to pass the two tests above. However, in this set
> of tests I did not clean up the stubs, which, of course, creates an
> unacceptable memory leak.
> So, the next series of tests included cleaning up the stub right at the end
> of the on_complete/on_failure callbacks. This, however, caused crashes in
> the axutil library. Specifically, the line of code which caused the crash was
> (in op_client.c):
> axis2_async_result_free(args_list->op_client->async_result, th_env);
> With further investigation, I was able to figure out that the error occurred
> because the deletion of the stub (which, in my test above, happened before
> the async_result_free call executed) freed the op_client variable.
> So, in my final set of tests, I created a stub resource manager which,
> essentially, frees the stubs in a time-delayed fashion. This gives the
> thread time to complete the on_complete/on_failure callback and clean up
> after itself before the stub is freed. This seems to work very reliably
> for me but, as I understand it, is not the most efficient way of doing
> things, as I am required to create a new stub for every web service call.
> So, I was wondering if these scenarios are tested in the Axis2/C regression
> tests and/or if I can do something else to get my test cases working.
> Frank
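For readers on pre-fix releases, here is a minimal model of the time-delayed
stub cleanup Frank describes above (illustrative only; stub_t and stub_free
are hypothetical stand-ins for the generated stub and its free function):

    #include <pthread.h>
    #include <stdlib.h>
    #include <time.h>

    typedef struct stub { int dummy; } stub_t;    /* hypothetical stub */
    static void stub_free(stub_t *s) { free(s); } /* hypothetical free */

    /* A retired stub plus the time at which it becomes safe to free. */
    typedef struct retired {
        stub_t *stub;
        time_t not_before;
        struct retired *next;
    } retired_t;

    static retired_t *retired_head = NULL;
    static pthread_mutex_t retired_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Called from on_complete/on_failure instead of freeing immediately. */
    static void stub_retire(stub_t *s, int delay_secs)
    {
        retired_t *r = malloc(sizeof *r);
        r->stub = s;
        r->not_before = time(NULL) + delay_secs;
        pthread_mutex_lock(&retired_lock);
        r->next = retired_head;
        retired_head = r;
        pthread_mutex_unlock(&retired_lock);
    }

    /* Called periodically from the main thread; frees stubs whose grace
       period has expired, after the worker threads are long finished. */
    static void stub_reap(void)
    {
        time_t now = time(NULL);
        pthread_mutex_lock(&retired_lock);
        retired_t **pp = &retired_head;
        while (*pp) {
            if ((*pp)->not_before <= now) {
                retired_t *r = *pp;
                *pp = r->next;
                stub_free(r->stub);
                free(r);
            } else {
                pp = &(*pp)->next;
            }
        }
        pthread_mutex_unlock(&retired_lock);
    }

In this sketch, stub_retire would replace the direct stub free at the end of
the on_complete/on_failure callbacks, and stub_reap would run from the
application's main loop. With the fix above in place, none of this is needed.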
--
This message was sent by Atlassian Jira
(v8.20.10#820010)