My current hypothesis is that “gather_facts: true” is causing the issue with 
4400 (4403) hosts.
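
If facts are the problem, a quick experiment (a sketch only; it assumes the
play-level "gather_subset" keyword behaves as documented on this 2.9 install)
is to keep gather_facts on but collect only the minimal fact subset, which
should still provide the ansible_date_time values the report filename needs
while dropping the large hardware/network/device facts visible in the -vvv
dump further down the thread:

- hosts: all
  become: no
  gather_facts: true
  gather_subset:
    - "!all"   # skip everything outside the built-in minimal set;
               # ansible_date_time is still collected as part of that set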

> On Dec 9, 2022, at 16:04, Mike Eggleston <[email protected]> wrote:
> 
> I would if I could. The Ansible controller should easily get by with 6G free, 
> and I can only update Python to what’s in the company’s update pipeline.
> 
> Mike
> 
>> On Dec 9, 2022, at 16:02, Jorge Rúa <[email protected]> wrote:
>> 
>> I'd try two things if I were in your shoes:
>> Increase memory on the Ansible controller
>> Update Python to a more recent version
>> Regards,
>> 
>> On Fri, Dec 9, 2022 at 21:44, Mike Eggleston (<[email protected]>) wrote:
>> I’ve set “forks = 1” and rerun my test, which runs an ansible-playbook that 
>> collects uptime.
>> Could it be the register rather than a memory leak?
>> Here free(1) shows free memory steadily going down:
>> 
>>> [meggleston@pc1uepsiadm01 ~]$ while true
>>> > do
>>> > date
>>> > free -m
>>> > sleep 60
>>> > done
>>> Fri Dec  9 16:30:31 EST 2022
>>>               total        used        free      shared  buff/cache   available
>>> Mem:           7821        1139        6190          18         490        6408
>>> Swap:          4095        2897        1198
>>> Fri Dec  9 16:31:31 EST 2022
>>>               total        used        free      shared  buff/cache   available
>>> Mem:           7821        1149        6127          19         544        6398
>>> Swap:          4095        2896        1199
>>> Fri Dec  9 16:32:31 EST 2022
>>>               total        used        free      shared  buff/cache   available
>>> Mem:           7821        1163        6112          19         544        6383
>>> Swap:          4095        2896        1199
>>> Fri Dec  9 16:33:31 EST 2022
>>>               total        used        free      shared  buff/cache   available
>>> Mem:           7821        1173        6100          19         546        6373
>>> Swap:          4095        2896        1199
>>> Fri Dec  9 16:34:31 EST 2022
>>>               total        used        free      shared  buff/cache   available
>>> Mem:           7821        1184        6081          19         554        6362
>>> Swap:          4095        2895        1200
>>> Fri Dec  9 16:35:31 EST 2022
>>>               total        used        free      shared  buff/cache   available
>>> Mem:           7821        1215        6042          20         562        6329
>>> Swap:          4095        2885        1210
>>> Fri Dec  9 16:36:31 EST 2022
>>>               total        used        free      shared  buff/cache   available
>>> Mem:           7821        1225        6032          20         563        6319
>>> Swap:          4095        2884        1211
>>> Fri Dec  9 16:37:31 EST 2022
>>>               total        used        free      shared  buff/cache   available
>>> Mem:           7821        1229        6028          20         563        6315
>>> Swap:          4095        2880        1215
>>> Fri Dec  9 16:38:31 EST 2022
>>>               total        used        free      shared  buff/cache   available
>>> Mem:           7821        1257        5999          20         563        6287
>>> Swap:          4095        2861        1234
>>> Fri Dec  9 16:39:31 EST 2022
>>>               total        used        free      shared  buff/cache   available
>>> Mem:           7821        1286        5971          21         563        6259
>>> Swap:          4095        2860        1235
>>> Fri Dec  9 16:40:31 EST 2022
>>>               total        used        free      shared  buff/cache   available
>>> Mem:           7821        1295        5961          21         563        6249
>>> Swap:          4095        2860        1235
>>> 
>> 
>> The playbook is really stupid:
>> 
>> [meggleston@pc1uepsiadm01 playbooks]$ cat y.yml
>> # $Id$
>> # $Log$
>>  
>> # set cyberark to a known point
>>  
>> # :!ansible-playbook --syntax-check %
>> # :!ansible-playbook --check --limit pc1uepsiadm01.res.prod.global %
>> # :!ansible-playbook --limit pc1uepsiadm01.res.prod.global %
>> # :!ansible-playbook %
>>  
>> ---
>> - hosts: all
>>   become: no
>>   gather_facts: true
>>  
>>   vars:
>>     reportfile: "/tmp/inventory-report-{{ ansible_date_time.year }}{{ ansible_date_time.month }}{{ ansible_date_time.day }}.csv"
>>     uptime_host: "UNKNOWN"
>>  
>>   tasks:
>>     - name: "get the uptime"
>>       command: uptime
>>       register: uptime
>>  
>>     - set_fact:
>>         uptime_host: "{{ uptime.stdout }}"
>>       when: uptime is match("")
>>  
>>     - name: "create the file and write the header"
>>       lineinfile:
>>         path: "{{ reportfile }}"
>>         state: present
>>         insertafter: EOF
>>         create: true
>>         line: '"hostname","uptime"'
>>       delegate_to: localhost
>>  
>>     - name: "write the line for this host"
>>       lineinfile:
>>         path: "{{ reportfile }}"
>>         state: present
>>         insertafter: EOF
>>         create: true
>>         line: '"{{ ansible_host }}","{{ uptime_host }}"'
>>       delegate_to: localhost
>> 
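>> If the per-host register plus the two lineinfile tasks delegated to localhost 
>> turn out to be the pressure point, an alternative sketch (illustrative only, 
>> reusing the variable names above) keeps the small uptime register per host but 
>> writes the whole CSV from a single run_once task, so localhost is touched once 
>> per run instead of twice per host:
>>  
>>     - name: "write the report once for all hosts"
>>       run_once: true
>>       delegate_to: localhost
>>       copy:
>>         dest: "{{ reportfile }}"
>>         content: |
>>           "hostname","uptime"
>>           {% for h in ansible_play_hosts %}
>>           "{{ hostvars[h].ansible_host | default(h) }}","{{ hostvars[h].uptime.stdout | default('UNKNOWN') }}"
>>           {% endfor %}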
>> 
>> Mike
>> 
>> 
>>> On Dec 9, 2022, at 15:29, Mike Eggleston <[email protected]> wrote:
>>> 
>>> Does Ansible have a memory leak (that only shows up with a high number of 
>>> hosts)?
>>> 
>>> Mike
>>> 
>>> [meggleston@pc1uepsiadm01 ~]$ rpm -qa | grep -i ansible
>>> ansible-2.9.27-1.el7.noarch
>>> 
>>>> On Dec 9, 2022, at 09:16, Mike Eggleston <[email protected]> wrote:
>>>> 
>>>> I changed forks back to 5 (commented out my change) and I still get the 
>>>> out-of-memory error. I removed all hosts that are in AWS, so I’m not using 
>>>> the proxy in ssh(1). My inventory is down to 4400 hosts. I wonder what’s 
>>>> eating the memory. Any ideas?
>>>> 
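>>>> One rough way to see how much of that memory is facts (a sketch only; the 
>>>> number is a lower bound, since the controller also holds other per-host 
>>>> state) is to print the serialized size of one host's facts and multiply by 
>>>> the roughly 4400 hosts:
>>>>  
>>>>     - name: "rough size of one host's gathered facts"
>>>>       run_once: true
>>>>       debug:
>>>>         msg: "{{ ansible_facts | to_json | length }} bytes of facts for {{ inventory_hostname }}"
>>>>  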
>>>> Current stack trace:
>>>> 82"]}, "sectors": "7812935680", "start": "2048", "holders": [], "size": 
>>>> "3.64 TB"}}, "sas_device_handle": null, "sas_address": null, "virtual": 1, 
>>>> "host": "RAID bus controller: Broadcom / LSI MegaRAID SAS-3 3108 [Invader] 
>>>> (rev 02)", "sectorsize": "512", "removable": "0", "support_discard": "0", 
>>>> "model": "PERC H730P Mini", "wwn": "0x61866da06192eb0024e6a07712d7ee30", 
>>>> "holders": [], "size": "3.64 TB"}, "dm-4": {"scheduler_mode": "", 
>>>> "rotational": "1", "vendor": null, "sectors": "104857600", "links": 
>>>> {"masters": [], "labels": [], "ids": ["dm-name-rootvg-optvol", 
>>>> "dm-uuid-LVM-h8Zoe5OZiBjf9awu8HyY4OuQIIZK52yneJbOfXRJ6QddDY581MzfUj6Ai4MOtle8"],
>>>>  "uuids": ["51166620-cd67-4954-8b5f-cf91926b036d"]}, "sas_device_handle": 
>>>> null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", 
>>>> "removable": "0", "support_discard": "0", "model": null, "partitions": {}, 
>>>> "holders": [], "size": "50.00 GB"}, "dm-5": {"scheduler_mode": "", 
>>>> "rotational": "1", "vendor": null, "sectors": "62914560", "links": 
>>>> {"masters": [], "labels": [], "ids": ["dm-name-rootvg-tmpvol", 
>>>> "dm-uuid-LVM-h8Zoe5OZiBjf9awu8HyY4OuQIIZK52ynDjjkCPQW51kaWpzKqwJkcPy2qbRW0Fxm"],
>>>>  "uuids": ["38f0cd51-d7e1-4bca-a062-1b39ede2fed2"]}, "sas_device_handle": 
>>>> null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", 
>>>> "removable": "0", "support_discard": "0", "model": null, "partitions": {}, 
>>>> "holders": [], "size": "30.00 GB"}, "dm-2": {"scheduler_mode": "", 
>>>> "rotational": "1", "vendor": null, "sectors": "125829120", "links": 
>>>> {"masters": [], "labels": [], "ids": ["dm-name-rootvg-varvol", 
>>>> "dm-uuid-LVM-h8Zoe5OZiBjf9awu8HyY4OuQIIZK52ynRn8k3kCCl3ICeXjPbpYKBa1d9B7s2bhs"],
>>>>  "uuids": ["5cb9ffc3-fd77-4906-98d7-edb27aa63f40"]}, "sas_device_handle": 
>>>> null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", 
>>>> "removable": "0", "support_discard": "0", "model": null, "partitions": {}, 
>>>> "holders": [], "size": "60.00 GB"}, "dm-3": {"scheduler_mode": "", 
>>>> "rotational": "1", "vendor": null, "sectors": "8388608", "links": 
>>>> {"masters": [], "labels": [], "ids": ["dm-name-rootvg-homevol", 
>>>> "dm-uuid-LVM-h8Zoe5OZiBjf9awu8HyY4OuQIIZK52ynVWpzyMe28x7F4igthHHVgvTM2K8ZI08R"],
>>>>  "uuids": ["8627acb7-4c2b-4394-95d0-6a084066a23a"]}, "sas_device_handle": 
>>>> null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", 
>>>> "removable": "0", "support_discard": "0", "model": null, "partitions": {}, 
>>>> "holders": [], "size": "4.00 GB"}, "dm-0": {"scheduler_mode": "", 
>>>> "rotational": "1", "vendor": null, "sectors": "8388608", "links": 
>>>> {"masters": [], "labels": [], "ids": ["dm-name-rootvg-lv_swap", 
>>>> "dm-uuid-LVM-h8Zoe5OZiBjf9awu8HyY4OuQIIZK52ynqqkyDshARxIJhfyP1hRtk5SrMN3BK79c"],
>>>>  "uuids": ["799e361d-7fee-4f6b-ae45-75d75f518985"]}, "sas_device_handle": 
>>>> null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", 
>>>> "removable": "0", "support_discard": "0", "model": null, "partitions": {}, 
>>>> "holders": [], "size": "4.00 GB"}, "dm-1": {"scheduler_mode": "", 
>>>> "rotational": "1", "vendor": null, "sectors": "41943040", "links": 
>>>> {"masters": [], "labels": [], "ids": ["dm-name-rootvg-rootvol", 
>>>> "dm-uuid-LVM-h8Zoe5OZiBjf9awu8HyY4OuQIIZK52ynPxpNeSPUUNGqorDA6GDRwK4jFcd9IzuW"],
>>>>  "uuids": ["798d9b72-8fc1-475f-92f0-0fad71bd3e5a"]}, "sas_device_handle": 
>>>> null, "sas_address": null, "virtual": 1, "host": "", "sectorsize": "512", 
>>>> "removable": "0", "support_discard": "0", "model": null, "partitions": {}, 
>>>> "holders": [], "size": "20.00 GB"}}, "ansible_user_uid": 2101067335, 
>>>> "ansible_ssh_host_key_dsa_public": 
>>>> "AAAAB3NzaC1kc3MAAACBAPPCrb44cIbwiG15T60D7doNgsOgwLOW4N76U3gvkeiJUafrLqGexH0XMMEwRhFnGGxckQGhgQE3O2ZKmlgTAYFG+qaCDBjHPGBIxKE9PcMO+enFTUYKHd4KY+xid9f3J4QnpauJZXoB4Et2GGwE0Q8fBJB7bLevybjAgAbMfM51AAAAFQCFf6SYNVwXyG0c1RYjCzeaLMB22wAAAIBm8je+yytTJ7DigfHYoleH4LrWKD0g0PeSBFVKG0snNlorhBtCGa5QIKwgR9OE+BNXQddwcqHf1jwmn54wcROWicNbdJFdIrDHHSnbzBm2tOkiNqovTLx92676L45uOZlBzNHi/bqOSzbSem9Piukn6pDu2XsfLmXfd4wz1Z3XagAAAIEA4B7lnz4xWwgjZCnX2oXiOPOOkVH2Xo7MG3YibLr8DnuK1L8n3m/pkX3WhAqrfw87OHECkCE3Kg4EPnXwW9FfNLR4YQnJBXWCU5IJ5M+HSOE5IDSTyNlj3HEs3SaGC0EU8APei7SvRc4k+TlonHu3m1XeKsB6yCNYZdtGhm5q4Ps=",
>>>>  "ansible_bios_date": "11/26/2019", "ansible_system_capabilities": 
>>>> [""]}}\r\n', 'Connection to pc1udtlhhad561.prodc1.harmony.global 
>>>> closed.\r\n')
>>>> <pc1udtlhhad561.prodc1.harmony.global> ESTABLISH SSH CONNECTION FOR USER: 
>>>> None
>>>> <pc1udtlhhad561.prodc1.harmony.global> SSH: EXEC ssh -C -o 
>>>> ControlMaster=no -o ControlPersist=30s -o ConnectTimeout=15s -o 
>>>> KbdInteractiveAuthentication=no -o 
>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey 
>>>> -o PasswordAuthentication=no -o ConnectTimeout=10 -o 
>>>> ControlPath=/home/meggleston/.ansible/cp/e963c4e7ae 
>>>> pc1udtlhhad561.prodc1.harmony.global '/bin/sh -c '"'"'rm -f -r 
>>>> /home/meggleston/.ansible/tmp/ansible-tmp-1670598305.51-22544-115269321697104/
>>>>  > /dev/null 2>&1 && sleep 0'"'"''
>>>> <pc1udtlhhad561.prodc1.harmony.global> (0, '', '')
>>>> ERROR! Unexpected Exception, this is probably a bug: [Errno 12] Cannot 
>>>> allocate memory
>>>> the full traceback was:
>>>>  
>>>> Traceback (most recent call last):
>>>>   File "/usr/bin/ansible-playbook", line 123, in <module>
>>>>     exit_code = cli.run()
>>>>   File "/usr/lib/python2.7/site-packages/ansible/cli/playbook.py", line 128, in run
>>>>     results = pbex.run()
>>>>   File "/usr/lib/python2.7/site-packages/ansible/executor/playbook_executor.py", line 169, in run
>>>>     result = self._tqm.run(play=play)
>>>>   File "/usr/lib/python2.7/site-packages/ansible/executor/task_queue_manager.py", line 282, in run
>>>>     play_return = strategy.run(iterator, play_context)
>>>>   File "/usr/lib/python2.7/site-packages/ansible/plugins/strategy/linear.py", line 311, in run
>>>>     self._queue_task(host, task, task_vars, play_context)
>>>>   File "/usr/lib/python2.7/site-packages/ansible/plugins/strategy/__init__.py", line 390, in _queue_task
>>>>     worker_prc.start()
>>>>   File "/usr/lib/python2.7/site-packages/ansible/executor/process/worker.py", line 100, in start
>>>>     return super(WorkerProcess, self).start()
>>>>   File "/usr/lib64/python2.7/multiprocessing/process.py", line 130, in start
>>>>     self._popen = Popen(self)
>>>>   File "/usr/lib64/python2.7/multiprocessing/forking.py", line 121, in __init__
>>>>     self.pid = os.fork()
>>>> OSError: [Errno 12] Cannot allocate memory
>>>> 
>>>> 
>>>>> On Dec 6, 2022, at 11:55, Todd Lewis <[email protected]> wrote:
>>>>> 
>>>>> I'm not particularly fluent in instrumenting resource consumption, but 
>>>>> I'm going out on a limb and guessing that 50 or so ssh connections is a 
>>>>> lot more lightweight than 50 forks of ansible-playbook. So ignoring ssh 
>>>>> as a possible resource limit for the moment, try changing forks back to 5 
>>>>> and running your playbook. At the same time, in another window, monitor 
>>>>> (in a way to be determined by you) resource consumption. I'd expect it to 
>>>>> work with 5 forks, just not as fast as with more forks.
>>>>> 
>>>>> If it *does* work, then try it again with, say, 10 forks and compare 
>>>>> resources during that run to the 5 fork run. I expect this to also work, 
>>>>> barely, and that you'll be almost out of … something. But you'll also 
>>>>> have a much better picture of where the walls are in this black box.
>>>>> 
>>>>> On Tuesday, December 6, 2022 at 9:32:30 AM UTC-5 [email protected] wrote:
>>>>> I’ve not changed “strategy”, but I did change “forks” from 5 to 50.
>>>>> I copied /etc/ansible/ansible.cfg to ~/.ansible.cfg and set forks = 50, 
>>>>> inventory = $HOME/src/ansible/inventory, and log_path = 
>>>>> /tmp/${USER}_ansible.log.
>>>>> 
>>>>> 
>>>>>> On Dec 5, 2022, at 17:21, Todd Lewis <[email protected]> wrote:
>>>>>> 
>>>>> 
>>>>>> Have you changed any defaults for "strategy" or "forks"?
>>>>>> 
>>>>>> Also I see your ssh is config'd for "-o ControlMaster=auto -o 
>>>>>> ControlPersist=60s". I'm not sure how many hosts you're caching 
>>>>>> connections for during any given 60-second window, or how much memory 
>>>>>> that would eat, but it may be a significant factor.
>>>>>> 
>>>>>> On Monday, December 5, 2022 at 6:03:42 PM UTC-5 [email protected] wrote:
>>>>>> 5709
>>>>>> 
>>>>>> 
>>>>>>> On Dec 5, 2022, at 15:56, Todd Lewis <[email protected]> wrote:
>>>>>>> 
>>>>>> 
>>>>>>> How many hosts are in your inventory?
>>>>>>> 
>>>>>>> On Monday, December 5, 2022 at 4:52:39 PM UTC-5 [email protected] wrote:
>>>>>>> I’m getting: 
>>>>>>> 
>>>>>>> e/meggleston/.ansible/tmp/ansible-tmp-1670263852.31-8085-175291763336523/
>>>>>>>  > /dev/null 2>&1 && sleep 0'"'"'' 
>>>>>>> <pa2udtlhsql602.prod.harmony.aws2> ESTABLISH SSH CONNECTION FOR USER: 
>>>>>>> None 
>>>>>>> <pa2udtlhsql602.prod.harmony.aws2> SSH: EXEC ssh -C -o 
>>>>>>> ControlMaster=auto -o ControlPersist=60s -o 
>>>>>>> KbdInteractiveAuthentication=no -o 
>>>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>>>>  -o PasswordAuthentication=no -o ConnectTimeout=10 -o 
>>>>>>> ControlPath=/home/meggleston/.ansible/cp/f012ac57b9 
>>>>>>> pa2udtlhsql602.prod.harmony.aws2 '/bin/sh -c '"'"'rm -f -r 
>>>>>>> /home/meggleston/.ansible/tmp/ansible-tmp-1670263852.91-8148-50804275661258/
>>>>>>>  > /dev/null 2>&1 && sleep 0'"'"'' 
>>>>>>> <pa2udtlhsql1023.prod.harmony.aws2> ESTABLISH SSH CONNECTION FOR USER: 
>>>>>>> None 
>>>>>>> <pa2udtlhsql604.prod.harmony.aws2> ESTABLISH SSH CONNECTION FOR USER: 
>>>>>>> None 
>>>>>>> <pa2udtlhsql1023.prod.harmony.aws2> SSH: EXEC ssh -C -o 
>>>>>>> ControlMaster=auto -o ControlPersist=60s -o 
>>>>>>> KbdInteractiveAuthentication=no -o 
>>>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>>>>  -o PasswordAuthentication=no -o ConnectTimeout=10 -o 
>>>>>>> ControlPath=/home/meggleston/.ansible/cp/754f3010c5 
>>>>>>> pa2udtlhsql1023.prod.harmony.aws2 '/bin/sh -c '"'"'rm -f -r 
>>>>>>> /home/meggleston/.ansible/tmp/ansible-tmp-1670263852.31-8085-175291763336523/
>>>>>>>  > /dev/null 2>&1 && sleep 0'"'"'' 
>>>>>>> <pa2udtlhsql604.prod.harmony.aws2> SSH: EXEC ssh -C -o 
>>>>>>> ControlMaster=auto -o ControlPersist=60s -o 
>>>>>>> KbdInteractiveAuthentication=no -o 
>>>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>>>>  -o PasswordAuthentication=no -o ConnectTimeout=10 -o 
>>>>>>> ControlPath=/home/meggleston/.ansible/cp/09c53a2792 
>>>>>>> pa2udtlhsql604.prod.harmony.aws2 '/bin/sh -c '"'"'rm -f -r 
>>>>>>> /home/meggleston/.ansible/tmp/ansible-tmp-1670263853.52-8164-79599240649234/
>>>>>>>  > /dev/null 2>&1 && sleep 0'"'"'' 
>>>>>>> <pa2udtlhsql1020.prod.harmony.aws2> ESTABLISH SSH CONNECTION FOR USER: 
>>>>>>> None 
>>>>>>> <pa2udtlhsql1020.prod.harmony.aws2> SSH: EXEC ssh -C -o 
>>>>>>> ControlMaster=auto -o ControlPersist=60s -o 
>>>>>>> KbdInteractiveAuthentication=no -o 
>>>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>>>>  -o PasswordAuthentication=no -o ConnectTimeout=10 -o 
>>>>>>> ControlPath=/home/meggleston/.ansible/cp/3301cea578 
>>>>>>> pa2udtlhsql1020.prod.harmony.aws2 '/bin/sh -c '"'"'rm -f -r 
>>>>>>> /home/meggleston/.ansible/tmp/ansible-tmp-1670263852.15-8057-21113899783559/
>>>>>>>  > /dev/null 2>&1 && sleep 0'"'"'' 
>>>>>>> <pa2udtlhsql602.prod.harmony.aws2> ESTABLISH SSH CONNECTION FOR USER: 
>>>>>>> None 
>>>>>>> <pa2udtlhsql602.prod.harmony.aws2> SSH: EXEC ssh -C -o 
>>>>>>> ControlMaster=auto -o ControlPersist=60s -o 
>>>>>>> KbdInteractiveAuthentication=no -o 
>>>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>>>>  -o PasswordAuthentication=no -o ConnectTimeout=10 -o 
>>>>>>> ControlPath=/home/meggleston/.ansible/cp/f012ac57b9 
>>>>>>> pa2udtlhsql602.prod.harmony.aws2 '/bin/sh -c '"'"'rm -f -r 
>>>>>>> /home/meggleston/.ansible/tmp/ansible-tmp-1670263852.91-8148-50804275661258/
>>>>>>>  > /dev/null 2>&1 && sleep 0'"'"'' 
>>>>>>> <pa2udtlhsql1022.prod.harmony.aws2> ESTABLISH SSH CONNECTION FOR USER: 
>>>>>>> None 
>>>>>>> <pa2udtlhsql1022.prod.harmony.aws2> SSH: EXEC ssh -C -o 
>>>>>>> ControlMaster=auto -o ControlPersist=60s -o 
>>>>>>> KbdInteractiveAuthentication=no -o 
>>>>>>> PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
>>>>>>>  -o PasswordAuthentication=no -o ConnectTimeout=10 -o 
>>>>>>> ControlPath=/home/meggleston/.ansible/cp/a501b68168 
>>>>>>> pa2udtlhsql1022.prod.harmony.aws2 '/bin/sh -c '"'"'rm -f -r 
>>>>>>> /home/meggleston/.ansible/tmp/ansible-tmp-1670263852.07-8072-136961495388876/
>>>>>>>  > /dev/null 2>&1 && sleep 0'"'"'' 
>>>>>>> ERROR! Unexpected Exception, this is probably a bug: [Errno 12] Cannot 
>>>>>>> allocate memory 
>>>>>>> the full traceback was: 
>>>>>>> 
>>>>>>> Traceback (most recent call last):
>>>>>>>   File "/usr/bin/ansible-playbook", line 123, in <module>
>>>>>>>     exit_code = cli.run()
>>>>>>>   File "/usr/lib/python2.7/site-packages/ansible/cli/playbook.py", line 128, in run
>>>>>>>     results = pbex.run()
>>>>>>>   File "/usr/lib/python2.7/site-packages/ansible/executor/playbook_executor.py", line 169, in run
>>>>>>>     result = self._tqm.run(play=play)
>>>>>>>   File "/usr/lib/python2.7/site-packages/ansible/executor/task_queue_manager.py", line 282, in run
>>>>>>>     play_return = strategy.run(iterator, play_context)
>>>>>>>   File "/usr/lib/python2.7/site-packages/ansible/plugins/strategy/linear.py", line 311, in run
>>>>>>>     self._queue_task(host, task, task_vars, play_context)
>>>>>>>   File "/usr/lib/python2.7/site-packages/ansible/plugins/strategy/__init__.py", line 390, in _queue_task
>>>>>>>     worker_prc.start()
>>>>>>>   File "/usr/lib/python2.7/site-packages/ansible/executor/process/worker.py", line 100, in start
>>>>>>>     return super(WorkerProcess, self).start()
>>>>>>>   File "/usr/lib64/python2.7/multiprocessing/process.py", line 130, in start
>>>>>>>     self._popen = Popen(self)
>>>>>>>   File "/usr/lib64/python2.7/multiprocessing/forking.py", line 121, in __init__
>>>>>>>     self.pid = os.fork()
>>>>>>> 
>>>>>>> When I run a stupid playbook with the command: ansible-playbook -vvv 
>>>>>>> 1.yml 
>>>>>>> 
>>>>>>> for the playbook: 
>>>>>>> ---
>>>>>>> - hosts: all
>>>>>>>   become: yes
>>>>>>>   gather_facts: true
>>>>>>> 
>>>>>>>   tasks:
>>>>>>>     - name: "remove the file"
>>>>>>>       file:
>>>>>>>         path: /tmp/getspace.sh
>>>>>>>         state: absent
>>>>>>> 
>>>>>>> I added the “-vvv” when the previous run told me to. Any ideas what’s 
>>>>>>> going on (besides the obvious “out of memory”) and how to fix this? 
>>>>>>> 
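>>>>>>> Since this play never references any facts, one first isolation step (a 
>>>>>>> sketch of the same play with nothing else changed) is to skip fact 
>>>>>>> gathering entirely and see whether the memory error goes away:
>>>>>>> 
>>>>>>> ---
>>>>>>> - hosts: all
>>>>>>>   become: yes
>>>>>>>   # no task below uses facts, so skip collecting them for ~4400 hosts
>>>>>>>   gather_facts: false
>>>>>>> 
>>>>>>>   tasks:
>>>>>>>     - name: "remove the file"
>>>>>>>       file:
>>>>>>>         path: /tmp/getspace.sh
>>>>>>>         state: absent
>>>>>>> 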
>>>>>>> Mike
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>> 
>>> 
>> 
>> 
>> 
> 
