I took a look at https://access.redhat.com/solutions/2044393 and checked 
the queue depth on the capsule:

[root@wellcapsuleext ~]# qpid-stat --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt -b amqps://localhost:5671 -q resource_manager
Properties:
  Name              Durable  AutoDelete  Exclusive  FlowStopped  FlowStoppedCount  Consumers  Bindings
  ======================================================================================================
  resource_manager  Y        N           N          N            0                 1          2

Optional Properties:
  Property      Value
  ============================================================================
  arguments     {u'passive': False, u'exclusive': False, u'arguments': None}
  alt-exchange

Statistics:
  Statistic                   Messages  Bytes
  ===============================================
  queue-depth                 388       637,944
  total-enqueues              413       700,369
  total-dequeues              25        62,425
  persistent-enqueues         0         0
  persistent-dequeues         0         0
  transactional-enqueues      0         0
  transactional-dequeues      0         0
  flow-to-disk-depth          0         0
  flow-to-disk-enqueues       0         0
  flow-to-disk-dequeues       0         0
  acquires                    844
  releases                    431
  discards-ttl-expired        0
  discards-limit-overflow     0
  discards-ring-overflow      0
  discards-lvq-replace        0
  discards-subscriber-reject  0
  discards-purged             0
  reroutes                    0
[root@wellcapsuleext ~]# 
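Since I was re-running that command by hand, here's a small sketch of how the Messages count can be pulled out with awk so it's easy to watch in a loop (I'm parsing a saved sample line here; in practice you'd pipe the live qpid-stat command in — the certificate path and broker URL are just from my setup above):

```shell
# Hypothetical helper: extract the queue-depth message count from qpid-stat
# output. Live version would be:
#   qpid-stat --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt \
#     -b amqps://localhost:5671 -q resource_manager | awk '/queue-depth/ {print $2}'
# Here we parse a captured sample line instead:
sample='  queue-depth                 388       637,944'
echo "$sample" | awk '/queue-depth/ {print $2}'
# → 388
```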


Checking over the course of a few minutes showed no change, so I ran the 
workaround from that article:

[root@wellcapsuleext ~]# for i in pulp_resource_manager pulp_workers pulp_celerybeat; do service $i restart; done
Redirecting to /bin/systemctl restart  pulp_resource_manager.service
Redirecting to /bin/systemctl restart  pulp_workers.service
Redirecting to /bin/systemctl restart  pulp_celerybeat.service
[root@wellcapsuleext ~]# 

The dequeues changed slightly, and the acquires/releases went up, but only 
to a point; they are now stuck here:



[root@wellcapsuleext ~]# qpid-stat --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt -b amqps://localhost:5671 -q resource_manager
Properties:
  Name              Durable  AutoDelete  Exclusive  FlowStopped  FlowStoppedCount  Consumers  Bindings
  ======================================================================================================
  resource_manager  Y        N           N          N            0                 1          2

Optional Properties:
  Property      Value
  ============================================================================
  arguments     {u'passive': False, u'exclusive': False, u'arguments': None}
  alt-exchange

Statistics:
  Statistic                   Messages  Bytes
  ===============================================
  queue-depth                 381       628,577
  total-enqueues              417       707,065
  total-dequeues              36        78,488
  persistent-enqueues         0         0
  persistent-dequeues         0         0
  transactional-enqueues      0         0
  transactional-dequeues      0         0
  flow-to-disk-depth          0         0
  flow-to-disk-enqueues       0         0
  flow-to-disk-dequeues       0         0
  acquires                    1,235
  releases                    818
  discards-ttl-expired        0
  discards-limit-overflow     0
  discards-ring-overflow      0
  discards-lvq-replace        0
  discards-subscriber-reject  0
  discards-purged             0
  reroutes                    0
[root@wellcapsuleext ~]#
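For what it's worth, the difference between the two snapshots (numbers copied straight from the output above) works out to only a handful of messages actually processed after the restart:

```shell
# Deltas between the two qpid-stat snapshots above (values hard-coded from
# my pasted output) -- the depth only dropped by 7 despite the restart.
echo $((417 - 413))   # new enqueues since restart
echo $((36 - 25))     # new dequeues since restart
echo $((388 - 381))   # net drop in queue-depth (enqueues - dequeues = 4 - 11 = -7)
```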

Thanks for your help! 
