[ 
https://issues.apache.org/jira/browse/AXIS2-4880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dan Armstrong updated AXIS2-4880:
---------------------------------

    Description: 
First, ADB's XMLStreamReader implementation becomes much slower when many 
complex types exist in the TypeTable, because building the ADBNamespaceContext 
requires repeated iterations through the entire map.  I have removed the need 
for this repeated iteration by:
  1) TypeTable generates any missing prefixes when a new QName is added
  2) TypeTable maintains prefix->namespace and namespace->prefix mappings
  3) ADBNamespaceContext now checks in the following order (this should be 
functionally equivalent to adding all complex types from the TypeTable, but 
without the overhead; a rough sketch follows this list):
    a) Use any bindings added directly to our context
    b) Check the TypeTable mappings (this is the key addition)
    c) Check the parent NamespaceContext
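
For illustration, here is a minimal sketch of that lookup order.  The class 
name, the exposed map views, and the addNamespace method below are invented 
for the example and are not the actual patched Axis2 code:

import javax.xml.namespace.NamespaceContext;
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Sketch only: local bindings first, then the TypeTable maps, then the parent.
public class NamespaceContextSketch implements NamespaceContext {

    // Bindings added directly to this context.
    private final Map<String, String> localPrefixToNs = new HashMap<String, String>();

    // Assumed views of the TypeTable's new prefix<->namespace maps.
    private final Map<String, String> typeTablePrefixToNs;
    private final Map<String, String> typeTableNsToPrefix;

    private final NamespaceContext parent;

    public NamespaceContextSketch(Map<String, String> typeTablePrefixToNs,
                                  Map<String, String> typeTableNsToPrefix,
                                  NamespaceContext parent) {
        this.typeTablePrefixToNs = typeTablePrefixToNs;
        this.typeTableNsToPrefix = typeTableNsToPrefix;
        this.parent = parent;
    }

    public void addNamespace(String prefix, String namespaceURI) {
        localPrefixToNs.put(prefix, namespaceURI);
    }

    public String getNamespaceURI(String prefix) {
        String ns = localPrefixToNs.get(prefix);                 // a) direct additions
        if (ns == null) {
            ns = typeTablePrefixToNs.get(prefix);                // b) TypeTable mappings
        }
        if (ns == null && parent != null) {
            ns = parent.getNamespaceURI(prefix);                 // c) parent context
        }
        return ns;
    }

    public String getPrefix(String namespaceURI) {
        for (Map.Entry<String, String> e : localPrefixToNs.entrySet()) {
            if (e.getValue().equals(namespaceURI)) {
                return e.getKey();                               // a) direct additions
            }
        }
        String prefix = typeTableNsToPrefix.get(namespaceURI);   // b) TypeTable mappings
        if (prefix == null && parent != null) {
            prefix = parent.getPrefix(namespaceURI);             // c) parent context
        }
        return prefix;
    }

    public Iterator<String> getPrefixes(String namespaceURI) {
        // Simplified: a complete implementation would also merge parent prefixes.
        String p = getPrefix(namespaceURI);
        return p == null
                ? Collections.<String>emptyList().iterator()
                : Collections.singletonList(p).iterator();
    }
}

The point of step (b) is that both lookups against the TypeTable become 
constant-time map gets instead of an iteration over every registered complex 
type.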

Second, ADBNamespaceContext is fairly heavyweight in heap space.  I have 
reduced its heap footprint by:
  1) Delaying allocation of the internal ArraySet until it is actually needed
  2) Using a new AddOneNamespaceContext implementation when exactly one QName 
is added to the NamespaceContext (sketched below).
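
As a rough illustration of the single-binding case (again, the class and 
method names here are invented for the example, not taken from the patch), a 
dedicated one-entry context needs only two String fields and no collection at 
all:

import javax.xml.namespace.NamespaceContext;
import java.util.Collections;
import java.util.Iterator;

// Sketch only: a context that holds exactly one prefix/namespace binding and
// delegates everything else to its parent, avoiding any Set/Map allocation.
final class OneBindingNamespaceContextSketch implements NamespaceContext {

    private final String prefix;
    private final String namespaceURI;
    private final NamespaceContext parent;

    OneBindingNamespaceContextSketch(String prefix, String namespaceURI,
                                     NamespaceContext parent) {
        this.prefix = prefix;
        this.namespaceURI = namespaceURI;
        this.parent = parent;
    }

    public String getNamespaceURI(String p) {
        if (prefix.equals(p)) {
            return namespaceURI;
        }
        return parent != null ? parent.getNamespaceURI(p) : null;
    }

    public String getPrefix(String uri) {
        if (namespaceURI.equals(uri)) {
            return prefix;
        }
        return parent != null ? parent.getPrefix(uri) : null;
    }

    public Iterator<String> getPrefixes(String uri) {
        return namespaceURI.equals(uri)
                ? Collections.singletonList(prefix).iterator()
                : Collections.<String>emptyList().iterator();
    }
}

The general-purpose context would then allocate its internal ArraySet only when 
a second binding is actually added, which is the other half of the heap savings.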

In our system we currently have 407 complex types in the TypeTable and were 
suffering a serious slowdown.  We have more types to add to this system, so the 
problem needed to be addressed.  The results of my patches to ADB in Axis2 
1.5.2 are:

    Times for a series of web service calls, retrieving heterogeneous arrays 
of complex types of various lengths:

      Before modifications:
          751.554 sec: Warm-up
          752.395 sec: Second pass

      After modifications:
          138.107 sec: Warm-up
          110.705 sec: Second pass

      RMI (Just for comparison):
          16.818 sec: Warm-up
          14.059 sec: Second pass

In summary, the new code is about seven times as fast for our scenario with a 
large number of complex types.  Where may I send the patches, or may I commit 
directly to your repositories?


Thank you,

Dan Armstrong
AO Industries, Inc.


> I have patches to fix poor scalability of ADB's POJO XMLStreamReader 
> implementation
> -----------------------------------------------------------------------------------
>
>                 Key: AXIS2-4880
>                 URL: https://issues.apache.org/jira/browse/AXIS2-4880
>             Project: Axis2
>          Issue Type: Improvement
>          Components: adb
>    Affects Versions: 1.5.2
>         Environment: Debian Lenny (x86_64), Java 1.6.0 (Sun), NetBeans, 
> Tomcat 6, POJO web service deployed as .aar file.
>            Reporter: Dan Armstrong
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>

