Anyone ever encountered anything like this? I found 67K pending pulp tasks on
one of my pulp servers today. I'm running pulp 2.10.3-1 (I know, I know, I'm
fixing that) with MongoDB 2.6 on RHEL 7, with a couple hundred yum repos and one
python repo. We recently developed a configuration management state to manage
the pulp repos on all our pulp servers, and that appears to have caused this
issue by submitting a large number of pulp tasks in a short period and then
retrying 30 minutes later. My question now is: how can I kill this many pulp
tasks more efficiently than with the for loop I'm using?
# A whole lotta pulp tasks
[root@pulp-server :~]# pulp-admin tasks list | grep 'Task Id' | wc -l
67016
# A for loop to generate a list of the task IDs and cancel one at a time.
for n in $(pulp-admin tasks list | grep 'Task Id' | awk '{print $NF}'); do
    echo "$n"
    pulp-admin tasks cancel --task-id "$n"
done
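One easy speed-up on the loop above, sketched here: most of the per-task cost is starting a fresh pulp-admin (Python CLI) process for every cancel, so generate the ID list once and let xargs fan the cancels out across several pulp-admin processes in parallel. The `-P 10` parallelism and the `/tmp/pulp-task-ids.txt` path are arbitrary choices for illustration; tune them for your server.

```shell
# Dump the task IDs once, then cancel with up to 10 parallel
# pulp-admin processes instead of one sequential cancel per iteration.
pulp-admin tasks list | awk '/Task Id/ {print $NF}' > /tmp/pulp-task-ids.txt
xargs -n 1 -P 10 -I {} pulp-admin tasks cancel --task-id {} < /tmp/pulp-task-ids.txt
```

This keeps each cancel as a separate pulp-admin invocation, so a single failure doesn't abort the run; failed IDs can simply be re-fed through the same pipeline.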
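If CLI startup overhead is still too slow at 67K tasks, the REST API can be hit directly with one lightweight curl call per task. This is a sketch assuming Pulp 2's task cancellation endpoint (`DELETE /pulp/api/v2/tasks/<task_id>/`), a server at `localhost`, and basic auth as `admin:admin` (all placeholders); check the task API docs for your 2.10 release before running it.

```shell
# Cancel each task via the Pulp 2 REST API instead of spawning
# a full pulp-admin process per task.
# -k skips cert verification for a self-signed Pulp cert; drop it if
# your CA is trusted. Replace admin:admin with real credentials.
pulp-admin tasks list | awk '/Task Id/ {print $NF}' | while read -r task_id; do
    curl -s -k -u admin:admin -X DELETE \
        "https://localhost/pulp/api/v2/tasks/${task_id}/"
done
```

The same list could also be fed through `xargs -P` to run the curl calls in parallel if the sequential version is still too slow.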
Thanks
Dustin

_______________________________________________
Pulp-list mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/pulp-list