[ https://issues.apache.org/jira/browse/STORM-820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14556611#comment-14556611 ]

ASF GitHub Bot commented on STORM-820:
--------------------------------------

Github user d2r commented on a diff in the pull request:

    https://github.com/apache/storm/pull/554#discussion_r30921420
  
    --- Diff: storm-core/src/ui/public/templates/topology-page-template.html ---
    @@ -131,33 +131,37 @@
         </tbody>
       </table>
     </script>
    +
    +<script id="topology-visualization-container-template" type="text/html">
    --- End diff ---
    
    For the visualization to initialize, it needs to call the old nimbus
    thrift API getTopologyInfo, which has not changed.  I only wanted the
    visualization to initialize when the user shows it, not to pull down all
    the stream data for every executor each time the page loads.  If we did
    that, we would lose most of the gains from aggregating stats data on
    nimbus.
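
A minimal sketch of the lazy-initialization idea described in the comment,
assuming Storm UI's /api/v1/topology/:id/visualization REST route (which is
backed by the getTopologyInfo thrift call); the function names here are
hypothetical illustrations, not the actual patch:

    // Sketch: fetch visualization data only the first time the user
    // shows the container, so a plain page load never pulls the
    // per-executor stream data.
    let visualizationLoaded = false;

    async function showVisualization(topologyId: string): Promise<void> {
      if (visualizationLoaded) {
        return; // already initialized; nothing more to fetch
      }
      const resp = await fetch(
        `/api/v1/topology/${topologyId}/visualization`);
      const data = await resp.json();
      renderVisualization(data); // hypothetical rendering helper
      visualizationLoaded = true;
    }

    function renderVisualization(data: unknown): void {
      // Placeholder for rendering the topology graph into the
      // topology-visualization-container template.
      console.log("rendering visualization", data);
    }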


> UI Topology & Component Pages have long load times with large, 
> highly-connected Topologies
> ------------------------------------------------------------------------------------------
>
>                 Key: STORM-820
>                 URL: https://issues.apache.org/jira/browse/STORM-820
>             Project: Apache Storm
>          Issue Type: Improvement
>    Affects Versions: 0.11.0
>            Reporter: Derek Dagit
>            Assignee: Derek Dagit
>
> In the UI, the Topology Page and the Component Page each make a 
> getTopologyInfoWithOpts thrift call to nimbus for executor heartbeat data. 
> Metrics from this data are then aggregated by the UI daemon for display.
> When large, highly-connected topologies are viewed in this way, the
> load times for each page can be minutes long.  In addition, heap usage by the 
> nimbus JVM can grow substantially as data for each executor, component, & 
> stream is serialized to be sent to the UI.
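
To make the quoted cost concrete: before aggregation moved to nimbus, the UI
daemon received one stats record per executor per stream and folded them into
per-component totals, so the serialized payload scaled with executors times
streams.  The sketch below illustrates that fold; the type names are
hypothetical stand-ins for the thrift structures, which are richer in
practice:

    // Hypothetical shape standing in for per-executor stream stats in
    // the thrift response; the real structures carry more (time windows,
    // acked/failed counts, error info, ...).
    interface ExecutorStreamStats {
      componentId: string;
      streamId: string;
      emitted: number;
      transferred: number;
    }

    // Fold E executors x S streams worth of records into one total per
    // component -- the work the UI daemon used to repeat on every page
    // load.
    function aggregateByComponent(
      stats: ExecutorStreamStats[],
    ): Map<string, { emitted: number; transferred: number }> {
      const totals = new Map<string, { emitted: number; transferred: number }>();
      for (const s of stats) {
        const t = totals.get(s.componentId) ?? { emitted: 0, transferred: 0 };
        t.emitted += s.emitted;
        t.transferred += s.transferred;
        totals.set(s.componentId, t);
      }
      return totals;
    }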



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)