These issues were raised back in 2016/2017, when we were running the latest
RHEL MRG 3 brokers. We have not used Qpid since then. But I remember this
issue because it is not a memory leak: it is standard std::vector (C++ STL)
behavior, in which memory is not released when items are removed. C++11
introduced shrink_to_fit() to deal with this, and patching the broker to
call it made our broker happy at the time.
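
For illustration, here is a minimal standalone sketch of that std::vector
behavior (plain C++11, no Qpid code involved): clearing the vector drops its
size to zero, but the capacity, and the heap allocation behind it, is
typically retained until shrink_to_fit() is called. Note that the standard
makes shrink_to_fit() a non-binding request, so the release is not
guaranteed on every implementation.

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v(1000000, 42);   // roughly 4 MB of payload
        v.clear();                         // size drops to 0 ...
        std::cout << "after clear():         capacity = " << v.capacity()
                  << '\n';                 // ... capacity usually unchanged

        v.shrink_to_fit();                 // C++11: request to free unused capacity
        std::cout << "after shrink_to_fit(): capacity = " << v.capacity()
                  << '\n';                 // typically 0 after this
        return 0;
    }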

On Sat, Dec 16, 2023 at 2:21 AM Nilesh Khokale (Contractor) <
nilesh.khok...@theodpcorp.com> wrote:

> Hi Virgilio,
>
> What versions of Qpid and the broker were you using when you applied the
> patches (we are currently on Qpid Dispatch 1.19.0 and Broker 2.27.1)?
> Additionally, could you provide guidance on confirming whether the memory
> issue is associated with the use of vectors within these long-running
> objects? If possible, please share the steps to follow.
> Thank you
>
>
>
> @Ted Ross <tr...@redhat.com>
>
> Please find below the output of the qdstat -m command for the qpid1
> server, which is currently consuming high memory.
>
>
> Memory Pools
>
>   type                        size   batch  thread-max  total   in-threads  rebal-in    rebal-out
>   ===============================================================================================
>   qd_bitmask_t                24     64     128         1,920   1,344       382,869     382,878
>   qd_buffer_t                 536    64     128         8,192   384         12,902,136  12,902,258
>   qd_composed_field_t         64     64     128         320     320         0           0
>   qd_composite_t              112    64     128         320     320         0           0
>   qd_connection_t             2,472  16     32          1,920   1,744       12,192      12,203
>   qd_connector_t              512    64     128         64      64          0           0
>   qd_deferred_call_t          32     64     128         384     256         312         314
>   qd_hash_handle_t            16     64     128         9,536   9,536       0           0
>   qd_hash_item_t              40     64     128         9,536   9,536       0           0
>   qd_iterator_t               160    64     128         3,648   384         275,400     275,451
>   qd_link_ref_t               24     64     128         2,368   192         60,144      60,178
>   qd_link_t                   144    64     128         7,872   7,488       1,276       1,282
>   qd_listener_t               456    64     128         64      64          0           0
>   qd_log_entry_t              2,112  16     32          1,088   1,088       0           0
>   qd_management_context_t     48     64     128         64      64          0           0
>   qd_message_content_t        1,040  64     128         1,472   384         37,587      37,604
>   qd_message_t                128    64     128         2,688   384         1,716,133   1,716,169
>   qd_node_t                   56     64     128         64      64          0           0
>   qd_parse_node_t             112    64     128         1,984   1,984       0           0
>   qd_parse_tree_t             32     64     128         64      64          0           0
>   qd_parsed_field_t           144    64     128         2,560   512         126,057     126,089
>   qd_pn_free_link_session_t   32     64     128         256     256         0           0
>   qd_session_t                56     64     128         6,400   5,888       1,391       1,399
>   qd_timer_t                  72     64     128         64      64          0           0
>   qdr_action_t                136    64     128         3,328   192         65,499,099  65,499,148
>   qdr_address_config_t        80     64     128         576     576         0           0
>   qdr_address_t               416    64     128         1,152   1,152       0           0
>   qdr_auto_link_t             144    64     128         6,464   6,464       0           0
>   qdr_conn_identifier_t       112    64     128         64      64          0           0
>   qdr_connection_info_t       88     64     128         2,112   1,920       51,340      51,343
>   qdr_connection_ref_t        24     64     128         64      64          0           0
>   qdr_connection_t            632    64     128         2,112   1,920       51,340      51,343
>   qdr_connection_work_t       56     64     128         2,304   448         104,372     104,401
>   qdr_core_timer_t            40     64     128         64      64          0           0
>   qdr_delivery_cleanup_t      32     64     128         704     448         3,413,056   3,413,060
>   qdr_delivery_ref_t          24     64     128         512     384         1,671,475   1,671,477
>   qdr_delivery_t              312    64     128         2,496   192         1,709,088   1,709,124
>   qdr_field_t                 40     64     128         3,136   256         79,489      79,534
>   qdr_forward_deliver_info_t  32     64     128         64      64          0           0
>   qdr_general_work_t          136    64     128         576     384         3,075,379   3,075,382
>   qdr_link_ref_t              24     64     128         17,088  13,888      32,905,297  32,905,347
>   qdr_link_t                  552    64     128         7,936   7,488       104,286     104,293
>   qdr_link_work_t             48     64     128         3,456   512         32,470,813  32,470,859
>   qdr_node_t                  88     64     128         64      64          0           0
>   qdr_query_t                 344    64     128         192     192         107         107
>   qdr_terminus_t              64     64     128         6,016   320         1,553       1,642
>   qdtm_router_t               16     64     128         128     128         0           0
>
>
>
> Memory Summary
>
>   VmSize    Pooled
>   ====================
>   12.1 GiB  25.5 MiB
>
>
> *Below are the current statistics for the qpid1 server:*
>
> Router Statistics
>
>   attr                             value
>   ===================================================
>   Version                          1.19.0
>   Mode                             interior
>   Router Id                        od-router-1-prod
>   Worker Threads                   4
>   Uptime                           006:21:31:41
>   VmSize                           12.1 GiB
>   Area                             0
>   Link Routes                      0
>   Auto Links                       6418
>   Links                            7246
>   Nodes                            1
>   Addresses                        1088
>   Connections                      1690
>   Presettled Count                 0
>   Dropped Presettled Count         0
>   Accepted Count                   106722455
>   Rejected Count                   0
>   Released Count                   273624
>   Modified Count                   962
>   Deliveries Delayed > 1sec        4901525
>   Deliveries Delayed > 10sec       775262
>   Deliveries Stuck > 10sec         0
>   Deliveries to Fallback           0
>   Links Blocked                    1002
>   Ingress Count                    84870446
>   Egress Count                     106011425
>   Transit Count                    712114
>   Deliveries from Route Container  52221048
>   Deliveries to Route Container    32649333
>
>
>
> *Thanks,*
>
> *Nilesh Khokale*
>
>
>
> *From:* Virgilio Fornazin <virgilioforna...@gmail.com>
> *Sent:* Friday, December 15, 2023 4:23 PM
> *To:* users@qpid.apache.org
> *Cc:* Ajit Tathawade (Contractor) <ajit.tathaw...@theodpcorp.com>; Nilesh
> Khokale (Contractor) <nilesh.khok...@theodpcorp.com>; Ted Ross <
> tr...@redhat.com>
> *Subject:* Re: High Memory consumption with Qpid Dispatch 1.19.0
>
>
> We had this issue with the Qpid C++ broker, and it was related to vectors
> used in long-running objects (queues, etc.). C++11 introduced
> shrink_to_fit(), and we patched the broker at that time to avoid those
> memory issues.
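>
> As an illustrative sketch of how one might confirm this pattern in
> long-running objects (the helper below is hypothetical, not actual broker
> code): comparing capacity() to size() shows how much memory a vector is
> still holding after a peak.
>
>     #include <cstdio>
>     #include <vector>
>
>     // Hypothetical diagnostic: report how many bytes a long-lived
>     // vector retains beyond what its current contents need.
>     template <typename T>
>     void report_slack(const char* name, const std::vector<T>& v) {
>         std::size_t held = v.capacity() * sizeof(T);
>         std::size_t used = v.size() * sizeof(T);
>         std::printf("%s: size=%zu capacity=%zu slack=%zu bytes\n",
>                     name, v.size(), v.capacity(), held - used);
>     }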
>
>
>
> On Fri, 15 Dec 2023 at 18:21 Ekta Awasthi <
> ekta.awas...@theodpcorp.com.invalid> wrote:
>
> Hello Ted,
>
>
>
> While running the qdstat -m command, I get the error below.
>
>
>
> -sh-4.2$ qdstat -a
>
> ConnectionException: Connection amqp://0.0.0.0:amqp disconnected:
> Condition('proton.pythonio', 'Connection refused to all addresses')
>
>
>
> *Ekta Awasthi*,
>
> Engineer, EAI Operations & Support | Office Depot, Inc.
> 6600 North Military Trail | Boca Raton, FL 33496-2434
> Office: 561-438-3552 | Mobile: 206-966-5577 | ekta.awas...@officedepot.com
>
> ------------------------------
>
> *From:* Ted Ross <tr...@redhat.com>
> *Sent:* Friday, December 15, 2023 2:48 PM
> *To:* Ekta Awasthi <ekta.awas...@theodpcorp.com>
> *Cc:* users@qpid.apache.org <users@qpid.apache.org>; Ajit Tathawade
> (Contractor) <ajit.tathaw...@theodpcorp.com>; Nilesh Khokale (Contractor)
> <nilesh.khok...@theodpcorp.com>
> *Subject:* Re: High Memory consumption with Qpid Dispatch 1.19.0
>
> Ekta,
>
>
>
> You can get more granular memory-use data by using the "qdstat -m" command
> against the router when its memory footprint is larger than you think it
> should be.
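>
> For example (a sketch; substitute the host and port of a listener that is
> actually reachable from where you run the tool, since the default
> amqp://0.0.0.0 target will otherwise fail as in the error shown earlier
> in this thread):
>
>     qdstat -m -b amqp://<router-host>:<port>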
>
>
>
> I assume you've been using this version for some time.  It might be
> helpful to look into what other things changed right before the memory
> consumption problem started.
>
>
>
> -Ted
>
>
>
> On Fri, Dec 15, 2023 at 12:08 PM Ekta Awasthi <ekta.awas...@theodpcorp.com>
> wrote:
>
> Hi All & Ted,
>
>
>
> We are currently encountering elevated memory consumption with Qpid
> Dispatch version 1.19.0. Although the memory is released upon restarting
> qpid, it gradually accumulates again, surpassing 80% memory usage. Qpid in
> our case serves as a routing mechanism, handling traffic from the NLB to
> QPID and then to the broker. While investigating the cause of this
> behavior, we examined memory usage in New Relic (NR); the graph indicates
> that the qdrouterd process is responsible for the memory consumption. We
> are seeking insights into the root cause of this issue and whether it may
> be related to the version (1.19.0). Please find additional information
> below.
>
>
>
> *Architecture*:
>
> NLB --> QPID (2 qpids acting as consumers) --> BROKER (total of 3 pairs,
> master/slave configuration for HA)
>
>
>
> *Qpids* were restarted on 12-10-23, and the gradual increase has been
> happening ever since.
>
>
>
> *Ekta Awasthi*
>