Can you try to create multiple ceph volumes manually via rbd from the
engine machine, so we can simulate what cinderlib does without actually
using it? This can be done with:
$ rbd -c ceph.conf create <pool_name>/vol1 --size 100M
$ rbd -c ceph.conf create <pool_name>/vol2 --size 100M
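
For comparison, the same check can be driven from python with the
rados/rbd bindings, which is closer to what cinderlib's RBDDriver does
internally. A minimal sketch, assuming /etc/ceph/ceph.conf and a usable
keyring on the engine machine (the pool name is a placeholder):

# Sketch only: create two test volumes via the python rbd bindings.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('<pool_name>')  # your pool name here
    try:
        r = rbd.RBD()
        for name in ('vol1', 'vol2'):
            r.create(ioctx, name, 100 * 1024 ** 2)  # 100 MiB each
        print(r.list(ioctx))  # expect both vol1 and vol2 to be listed
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

If both volumes show up this way, the ceph side is likely fine and the
problem sits between the engine and cinderlib.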

On Mon, Jul 8, 2019 at 4:58 PM Dan Poltawski <dan.poltaw...@tnp.net.uk>
wrote:

> On Mon, 2019-07-08 at 16:49 +0300, Benny Zlotnik wrote:
> > Not too useful unfortunately :\
> > Can you try py-list instead of py-bt? Perhaps it will provide better
> > results
>
> (gdb) py-list
>   57                if get_errno(ex) != errno.EEXIST:
>   58                    raise
>   59            return listener
>   60
>   61        def do_poll(self, seconds):
>  >62            return self.poll.poll(seconds)
> >
>
>
> Thanks for your help,
>
>
> Dan
>
> > On Mon, Jul 8, 2019 at 4:41 PM Dan Poltawski <
> > dan.poltaw...@tnp.net.uk> wrote:
> > > On Mon, 2019-07-08 at 16:25 +0300, Benny Zlotnik wrote:
> > > > Hi,
> > > >
> > > > You have a typo - it's py-bt. I just tried it myself, and I only
> > > > had to install:
> > > > $ yum install -y python-devel
> > > > (in addition to the packages specified in the link)
> > >
> > > Thanks - this is what I get:
> > >
> > > #3 Frame 0x7f2046b59ad0, for file
> > > /usr/lib/python2.7/site-packages/eventlet/hubs/epolls.py, line 62,
> > > in do_poll (self=<Hub(next_timers=[], clock=<function at remote
> > > 0x7f20661059b0>, debug_exceptions=True,
> > > debug_blocking_resolution=1, modify=<built-in method modify of
> > > select.epoll object at remote 0x7f2048455168>, running=True,
> > > debug_blocking=False, listeners={'read': {20:
> > > <FdListener(fileno=20, cb=<built-in method switch of
> > > greenlet.greenlet object at remote 0x7f2046878410>, spent=False,
> > > greenlet=<greenlet.greenlet at remote 0x7f2046878410>,
> > > evtype='read', mark_as_closed=<instancemethod at remote
> > > 0x7f2046b09500>, tb=<built-in method throw of greenlet.greenlet
> > > object at remote 0x7f2046878410>) at remote 0x7f20468cda10>},
> > > 'write': {}}, timers_canceled=0, greenlet=<greenlet.greenlet at
> > > remote 0x7f2046af70f0>, closed=[], stopping=False,
> > > timers=[(<float at remote 0x2830e20>, <Timer(seconds=<float at
> > > remote 0xced610>, tpl=(<built-in method switch of
> > > greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> > > called=F...(truncated)
> > >     return self.poll.poll(seconds)
> > > #6 Frame 0x32fbf30, for file
> > > /usr/lib/python2.7/site-packages/eventlet/hubs/poll.py, line 85,
> > > in wait (self=<Hub(next_timers=[], clock=<function at remote
> > > 0x7f20661059b0>, debug_exceptions=True,
> > > debug_blocking_resolution=1, modify=<built-in method modify of
> > > select.epoll object at remote 0x7f2048455168>, running=True,
> > > debug_blocking=False, listeners={'read': {20:
> > > <FdListener(fileno=20, cb=<built-in method switch of
> > > greenlet.greenlet object at remote 0x7f2046878410>, spent=False,
> > > greenlet=<greenlet.greenlet at remote 0x7f2046878410>,
> > > evtype='read', mark_as_closed=<instancemethod at remote
> > > 0x7f2046b09500>, tb=<built-in method throw of greenlet.greenlet
> > > object at remote 0x7f2046878410>) at remote 0x7f20468cda10>},
> > > 'write': {}}, timers_canceled=0, greenlet=<greenlet.greenlet at
> > > remote 0x7f2046af70f0>, closed=[], stopping=False,
> > > timers=[(<float at remote 0x2830e20>, <Timer(seconds=<float at
> > > remote 0xced610>, tpl=(<built-in method switch of
> > > greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> > > called=False) at r...(truncated)
> > >     presult = self.do_poll(seconds)
> > > #10 Frame 0x7f2046afca00, for file
> > > /usr/lib/python2.7/site-packages/eventlet/hubs/hub.py, line 346,
> > > in run (self=<Hub(next_timers=[], clock=<function at remote
> > > 0x7f20661059b0>, debug_exceptions=True,
> > > debug_blocking_resolution=1, modify=<built-in method modify of
> > > select.epoll object at remote 0x7f2048455168>, running=True,
> > > debug_blocking=False, listeners={'read': {20:
> > > <FdListener(fileno=20, cb=<built-in method switch of
> > > greenlet.greenlet object at remote 0x7f2046878410>, spent=False,
> > > greenlet=<greenlet.greenlet at remote 0x7f2046878410>,
> > > evtype='read', mark_as_closed=<instancemethod at remote
> > > 0x7f2046b09500>, tb=<built-in method throw of greenlet.greenlet
> > > object at remote 0x7f2046878410>) at remote 0x7f20468cda10>},
> > > 'write': {}}, timers_canceled=0, greenlet=<greenlet.greenlet at
> > > remote 0x7f2046af70f0>, closed=[], stopping=False,
> > > timers=[(<float at remote 0x2830e20>, <Timer(seconds=<float at
> > > remote 0xced610>, tpl=(<built-in method switch of
> > > greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> > > called=False) ...(truncated)
> > >     self.wait(sleep_time)
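
A note on the trace above: frames #3, #6 and #10 all belong to
eventlet's hub greenlet, which is expected to sit in epoll while idle,
so they don't show where the stuck create_volume request actually is.
One generic trick is to dump every greenlet's stack from inside the
hung process, e.g. via gdb's `call PyRun_SimpleString(...)` or a
signal handler wired up beforehand. A rough sketch, nothing
cinderlib-specific:

# Sketch only: print the stack of every live, suspended greenlet.
import gc
import traceback

import greenlet

for obj in gc.get_objects():
    # gr_frame is None for the currently running greenlet and for
    # greenlets that have already finished, so those are skipped.
    if isinstance(obj, greenlet.greenlet) and obj.gr_frame is not None:
        print('--- greenlet %r ---' % obj)
        traceback.print_stack(obj.gr_frame)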
> > >
> > >
> > >
> > > >
> > > > On Mon, Jul 8, 2019 at 2:40 PM Dan Poltawski <
> > > > dan.poltaw...@tnp.net.uk> wrote:
> > > > > Hi,
> > > > >
> > > > > On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:
> > > > > > > Any chance you can set up gdb[1] so we can find out where
> > > > > > > it's stuck exactly?
> > > > >
> > > > > Yes, absolutely - but I will need some assistance in getting
> > > > > GDB configured on the engine, as I am not very familiar with
> > > > > it - or with how to enable the correct repos to get the debug
> > > > > info.
> > > > >
> > > > > $ gdb python 54654
> > > > >
> > > > > [...]
> > > > >
> > > > > Reading symbols from /lib64/libfreeblpriv3.so...Reading symbols
> > > > > from /lib64/libfreeblpriv3.so...(no debugging symbols
> > > > > found)...done.
> > > > > (no debugging symbols found)...done.
> > > > > Loaded symbols for /lib64/libfreeblpriv3.so
> > > > > 0x00007fcf82256483 in epoll_wait () from /lib64/libc.so.6
> > > > > Missing separate debuginfos, use: debuginfo-install
> > > > > python-2.7.5-80.el7_6.x86_64
> > > > > (gdb) pt-bt
> > > > > Undefined command: "pt-bt".  Try "help".
> > > > >
> > > > >
> > > > > > > Also, which version of ovirt are you using?
> > > > >
> > > > > Using 4.3.4
> > > > >
> > > > > > > Can you also check the ceph logs for anything suspicious?
> > > > >
> > > > > I haven't seen anything so far, but it is an entirely
> > > > > reasonable possibility that this is a ceph misconfiguration,
> > > > > as we are learning about both tools.
> > > > >
> > > > >
> > > > > thanks,
> > > > >
> > > > > Dan
> > > > >
> > > > > > >
> > > > > > >
> > > > > > > [1] - https://wiki.python.org/moin/DebuggingWithGdb
> > > > > > > $ gdb python <cinderlib pid>
> > > > > > > then `py-bt`
> > > > > > >
> > > > > > > On Thu, Jul 4, 2019 at 7:00 PM <dan.poltaw...@tnp.net.uk> wrote:
> > > > > > > > > > Can you provide logs? mainly engine.log and
> > > > > > > > > > cinderlib.log (/var/log/ovirt-engine/cinderlib/cinderlib.log)
> > > > > > > > >
> > > > > > > > > If I create two volumes, the first one succeeds, but
> > > > > > > > > the second one hangs. If I look in the process list
> > > > > > > > > after creating the second volume, which doesn't
> > > > > > > > > succeed, I see the python ./cinderlib-client.py
> > > > > > > > > create_volume [...] command still running.
> > > > > > > > >
> > > > > > > > > On the ceph side, I can see only the one rbd volume.
> > > > > > > > >
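
As a cross-check, the same create can also be driven straight from a
python shell with cinderlib, taking ovirt-engine out of the picture.
A rough sketch, assuming RBD driver settings like those of the managed
block storage domain (all option values below are placeholders; use
the ones configured for your domain):

# Sketch only: create volumes directly through cinderlib.
import cinderlib as cl

cl.setup()
backend = cl.Backend(
    volume_backend_name='ceph',
    volume_driver='cinder.volume.drivers.rbd.RBDDriver',
    rbd_ceph_conf='/etc/ceph/ceph.conf',
    rbd_user='admin',        # placeholder
    rbd_pool='<pool_name>',  # placeholder
)
vol1 = backend.create_volume(size=1)
vol2 = backend.create_volume(size=1)  # the second create is what hangs via the engine
print([v.id for v in backend.volumes])

If the second create also hangs here, the problem is in cinderlib or
the driver rather than in the engine's command handling.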
> > > > > > > > > Logs below:
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > --- cinderlib.log ---
> > > > > > > > >
> > > > > > > > > 2019-07-04 16:46:30,863 - cinderlib-client - INFO - Fetch backend stats [b07698bb-1688-472f-841b-70a9d52a250d]
> > > > > > > > > 2019-07-04 16:46:56,308 - cinderlib-client - INFO - Creating volume '236285cc-ac01-4239-821c-4beadd66923f', with size '2' GB [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d]
> > > > > > > > > 2019-07-04 16:47:21,671 - cinderlib-client - INFO - Creating volume '84886485-554a-44ca-964c-9758b4a16aae', with size '2' GB [a793bfc9-fc37-4711-a144-d74c100cc75b]
> > > > > > > > >
> > > > > > > > > --- engine.log ---
> > > > > > > > >
> > > > > > > > > 2019-07-04 16:46:54,062+01 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-22) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command: AddDiskCommand internal: false. Entities affected : ID: 31536d80-ff45-496b-9820-15441d505924 Type: StorageAction group CREATE_DISK with role type USER
> > > > > > > > > 2019-07-04 16:46:54,150+01 INFO [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-1) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command: AddManagedBlockStorageDiskCommand internal: true.
> > > > > > > > > 2019-07-04 16:46:56,863+01 INFO [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] (EE-ManagedThreadFactory-commandCoordinator-Thread-1) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] cinderlib output:
> > > > > > > > > 2019-07-04 16:46:56,912+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-22) [] EVENT_ID: USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'test0' was successfully added.
> > > > > > > > > 2019-07-04 16:47:00,126+01 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-95) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Command 'AddDisk' id: '15fe157d-7adb-4031-9e81-f51aa0b6528f' child commands '[d056397a-7ed9-4c01-b880-dd518421a2c6]' executions were completed, status 'SUCCEEDED'
> > > > > > > > > 2019-07-04 16:47:01,136+01 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-99) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' successfully.
> > > > > > > > > 2019-07-04 16:47:01,141+01 INFO [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-99) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Ending command 'org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand' successfully.
> > > > > > > > > 2019-07-04 16:47:01,145+01 WARN [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-99) [] VM is null - no unlocking
> > > > > > > > > 2019-07-04 16:47:01,186+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-99) [] EVENT_ID: USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'test0' was successfully added.
> > > > > > > > > 2019-07-04 16:47:19,446+01 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-22) [a793bfc9-fc37-4711-a144-d74c100cc75b] Running command: AddDiskCommand internal: false. Entities affected : ID: 31536d80-ff45-496b-9820-15441d505924 Type: StorageAction group CREATE_DISK with role type USER
> > > > > > > > > 2019-07-04 16:47:19,464+01 INFO [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [a793bfc9-fc37-4711-a144-d74c100cc75b] Running command: AddManagedBlockStorageDiskCommand internal: true.
> > > > > > > > > 2019-07-04 16:48:19,501+01 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'commandCoordinator' is using 1 threads out of 10, 1 threads waiting for tasks.
> > > > > > > > > 2019-07-04 16:48:19,501+01 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'default' is using 0 threads out of 1, 5 threads waiting for tasks.
> > > > > > > > > 2019-07-04 16:48:19,501+01 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 9 threads waiting for tasks and 0 tasks in queue.
> > > > > > > > > 2019-07-04 16:48:19,501+01 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineScheduled' is using 0 threads out of 100, 100 threads waiting for tasks.
> > > > > > > > > 2019-07-04 16:48:19,501+01 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads waiting for tasks.
> > > > > > > > > 2019-07-04 16:48:19,501+01 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5, 1 threads waiting for tasks.
> > > > > > > > >
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M3TROLOIDD2ZFMIN464EW7JVYHOHBMRM/
