This bug was fixed in the package corosync - 2.4.3-0ubuntu1.3
---------------
corosync (2.4.3-0ubuntu1.3) bionic; urgency=medium
  [ Miriam España Acebal ]
  * d/libtotem-pg5.symbols: add a postfixed missing symbol,
    crypto_get_current_sec_header_size.

  [ Jorge Niedbalski ]
  * d/control: corosync binary depends on libqb-dev.
    (LP: #1677684)

 -- Miriam España Acebal <[email protected]>  Tue, 28 Sep 2021 20:00:07 +0200
** Changed in: corosync (Ubuntu Bionic)
Status: Fix Committed => Fix Released
--
You received this bug notification because you are a member of the
Nepali Language Coordinators Group (नेपाली भाषा समायोजकहरुको समूह), which is
subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1677684
Title:
/usr/bin/corosync-blackbox: 34: /usr/bin/corosync-blackbox: qb-blackbox: not found
Status in corosync package in Ubuntu:
Fix Released
Status in corosync source package in Trusty:
Won't Fix
Status in corosync source package in Xenial:
Won't Fix
Status in corosync source package in Zesty:
Won't Fix
Status in corosync source package in Bionic:
Fix Released
Status in corosync source package in Disco:
Won't Fix
Status in corosync source package in Eoan:
Won't Fix
Status in corosync source package in Focal:
Fix Released
Bug description:
[SRU]
[Impact]
The command corosync-blackbox can't be used in Bionic.
[Test Plan]
1) sudo apt-get install corosync
2) sudo corosync-blackbox
root@juju-niedbalski-xenial-machine-5:/home/ubuntu# dpkg -L corosync | grep black
/usr/bin/corosync-blackbox
Expected results: corosync-blackbox runs OK:
ubuntu@bionic:~/tmp$ sudo corosync-blackbox
Dumping the contents of /var/lib/corosync/fdata
[debug] shm size:8392704; real_size:8392704; rb->word_size:2098176
[debug] read total of: 8392724
Ringbuffer:
->NORMAL
->write_pt [2866]
->read_pt [0]
->size [2098176 words]
=>free [8381236 bytes]
=>used [11464 bytes]
debug Sep 28 20:21:50 totempg_waiting_trans_ack_cb(286):14:
waiting_trans_ack changed to 1
debug Sep 28 20:21:50 totemsrp_initialize(900):14: Token Timeout (3000 ms)
retransmit timeout (294 ms)
debug Sep 28 20:21:50 totemsrp_initialize(903):14: token hold (225 ms)
retransmits before loss (10 retrans)
debug Sep 28 20:21:50 totemsrp_initialize(910):14: join (50 ms) send_join
(0 ms) consensus (3600 ms) merge (200 ms)
debug Sep 28 20:21:50 totemsrp_initialize(913):14: downcheck (1000 ms)
fail to recv const (2500 msgs)
debug Sep 28 20:21:50 totemsrp_initialize(915):14: seqno unchanged const
(30 rotations) Maximum network MTU 1401
debug Sep 28 20:21:50 totemsrp_initialize(919):14: window size per
rotation (50 messages) maximum messages per rotation (17 messages)
debug Sep 28 20:21:50 totemsrp_initialize(923):14: missed count const (5
messages)
debug Sep 28 20:21:50 totemsrp_initialize(926):14: send threads (0 threads)
debug Sep 28 20:21:50 totemsrp_initialize(929):14: RRP token expired
timeout (294 ms)
debug Sep 28 20:21:50 totemsrp_initialize(932):14: RRP token problem
counter (2000 ms)
debug Sep 28 20:21:50 totemsrp_initialize(935):14: RRP threshold (10
problem count)
debug Sep 28 20:21:50 totemsrp_initialize(938):14: RRP multicast threshold
(100 problem count)
debug Sep 28 20:21:50 totemsrp_initialize(941):14: RRP automatic recovery
check timeout (1000 ms)
debug Sep 28 20:21:50 totemsrp_initialize(943):14: RRP mode set to none.
debug Sep 28 20:21:50 totemsrp_initialize(946):14:
heartbeat_failures_allowed (0)
debug Sep 28 20:21:50 totemsrp_initialize(948):14: max_network_delay (50
ms)
debug Sep 28 20:21:50 totemsrp_initialize(971):14: HeartBeat is Disabled.
To enable set heartbeat_failures_allowed > 0
notice Sep 28 20:21:50 totemnet_instance_initialize(248):14: Initializing
transport (UDP/IP Multicast).
notice Sep 28 20:21:50 init_nss(688):14: Initializing transmit/receive
security (NSS) crypto: none hash: none
debug Sep 28 20:21:50 totemudp_build_sockets_ip(923):14: Receive multicast
socket recv buffer size (320000 bytes).
debug Sep 28 20:21:50 totemudp_build_sockets_ip(929):14: Transmit
multicast socket send buffer size (320000 bytes).
debug Sep 28 20:21:50 totemudp_build_sockets_ip(935):14: Local receive
multicast loop socket recv buffer size (320000 bytes).
debug Sep 28 20:21:50 totemudp_build_sockets_ip(941):14: Local transmit
multicast loop socket send buffer size (320000 bytes).
trace Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 2 for
FD 8
trace Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 3 for
FD 9
trace Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 4 for
FD 12
notice Sep 28 20:21:50 timer_function_netif_check_timeout(669):14: The
network interface [127.0.0.1] is now up.
debug Sep 28 20:21:50 main_iface_change_fn(5101):14: Created or loaded
sequence id 8.127.0.0.1 for this ring.
info Sep 28 20:21:50 qb_ipcs_us_publish(537):9: server name: cmap
trace Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 5 for
FD 13
info Sep 28 20:21:50 qb_ipcs_us_publish(537):9: server name: cfg
trace Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 6 for
FD 14
info Sep 28 20:21:50 qb_ipcs_us_publish(537):9: server name: cpg
trace Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 7 for
FD 15
info Sep 28 20:21:50 qb_ipcs_us_publish(537):9: server name: votequorum
trace Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 8 for
FD 16
info Sep 28 20:21:50 qb_ipcs_us_publish(537):9: server name: quorum
trace Sep 28 20:21:50 qb_loop_poll_add(368):9: grown poll array to 9 for
FD 17
debug Sep 28 20:21:50 memb_state_gather_enter(2222):14: entering GATHER
state from 15(interface change).
debug Sep 28 20:21:50 memb_state_commit_token_create(3274):14: Creating
commit token because I am the rep.
debug Sep 28 20:21:50 old_ring_state_save(1605):14: Saving state aru 0
high seq received 0
debug Sep 28 20:21:50 memb_state_commit_enter(2271):14: entering COMMIT
state.
debug Sep 28 20:21:50 message_handler_memb_commit_token(4929):14: got
commit token
debug Sep 28 20:21:50 memb_state_recovery_enter(2308):14: entering
RECOVERY state.
debug Sep 28 20:21:50 memb_state_recovery_enter(2354):14: position [0]
member 127.0.0.1:
debug Sep 28 20:21:50 memb_state_recovery_enter(2358):14: previous ring
seq 8 rep 127.0.0.1
debug Sep 28 20:21:50 memb_state_recovery_enter(2364):14: aru 0 high
delivered 0 received flag 1
debug Sep 28 20:21:50 memb_state_recovery_enter(2462):14: Did not need to
originate any messages in recovery.
debug Sep 28 20:21:50 message_handler_memb_commit_token(4929):14: got
commit token
debug Sep 28 20:21:50 message_handler_memb_commit_token(4994):14: Sending
initial ORF token
trace Sep 28 20:21:50 totemsrp_mcast(2547):14: mcasted message added to
pending queue
debug Sep 28 20:21:50 message_handler_orf_token(4176):14: token retrans
flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
debug Sep 28 20:21:50 message_handler_orf_token(4187):14: install seq 0
aru 0 high seq received 0
debug Sep 28 20:21:50 message_handler_orf_token(4176):14: token retrans
flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
debug Sep 28 20:21:50 message_handler_orf_token(4187):14: install seq 0
aru 0 high seq received 0
debug Sep 28 20:21:50 message_handler_orf_token(4176):14: token retrans
flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
debug Sep 28 20:21:50 message_handler_orf_token(4187):14: install seq 0
aru 0 high seq received 0
debug Sep 28 20:21:50 message_handler_orf_token(4176):14: token retrans
flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
debug Sep 28 20:21:50 message_handler_orf_token(4187):14: install seq 0
aru 0 high seq received 0
debug Sep 28 20:21:50 message_handler_orf_token(4206):14: retrans flag
count 4 token aru 0 install seq 0 aru 0 0
debug Sep 28 20:21:50 old_ring_state_reset(1621):14: Resetting old ring
state
debug Sep 28 20:21:50 deliver_messages_from_recovery_to_regular(1852):14:
recovery to regular 1-0
trace Sep 28 20:21:50 memb_state_operational_enter(1943):14: Delivering to
app 1 to 0
debug Sep 28 20:21:50 totempg_waiting_trans_ack_cb(286):14:
waiting_trans_ack changed to 1
debug Sep 28 20:21:50 memb_state_operational_enter(2128):14: entering
OPERATIONAL state.
notice Sep 28 20:21:50 memb_state_operational_enter(2134):14: A new
membership (127.0.0.1:12) was formed. Members joined: 2130706433
trace Sep 28 20:21:50 totemsrp_mcast(2547):14: mcasted message added to
pending queue
trace Sep 28 20:21:50 messages_deliver_to_app(4278):14: Delivering 0 to 2
trace Sep 28 20:21:50 messages_deliver_to_app(4347):14: Delivering MCAST
message with seq 1 to pending delivery queue
trace Sep 28 20:21:50 messages_deliver_to_app(4347):14: Delivering MCAST
message with seq 2 to pending delivery queue
trace Sep 28 20:21:50 totemsrp_mcast(2547):14: mcasted message added to
pending queue
trace Sep 28 20:21:50 messages_deliver_to_app(4278):14: Delivering 2 to 3
trace Sep 28 20:21:50 messages_deliver_to_app(4347):14: Delivering MCAST
message with seq 3 to pending delivery queue
trace Sep 28 20:21:50 totemsrp_mcast(2547):14: mcasted message added to
pending queue
trace Sep 28 20:21:50 totemsrp_mcast(2547):14: mcasted message added to
pending queue
trace Sep 28 20:21:50 messages_free(2676):14: releasing messages up to and
including 2
trace Sep 28 20:21:50 messages_deliver_to_app(4278):14: Delivering 3 to 5
trace Sep 28 20:21:50 messages_deliver_to_app(4347):14: Delivering MCAST
message with seq 4 to pending delivery queue
trace Sep 28 20:21:50 messages_deliver_to_app(4347):14: Delivering MCAST
message with seq 5 to pending delivery queue
trace Sep 28 20:21:50 totemsrp_mcast(2547):14: mcasted message added to
pending queue
trace Sep 28 20:21:50 messages_free(2676):14: releasing messages up to and
including 3
trace Sep 28 20:21:50 messages_deliver_to_app(4278):14: Delivering 5 to 6
trace Sep 28 20:21:50 messages_deliver_to_app(4347):14: Delivering MCAST
message with seq 6 to pending delivery queue
debug Sep 28 20:21:50 totempg_waiting_trans_ack_cb(286):14:
waiting_trans_ack changed to 0
trace Sep 28 20:21:50 messages_free(2676):14: releasing messages up to and
including 5
trace Sep 28 20:21:50 messages_free(2676):14: releasing messages up to and
including 6
trace Sep 28 20:22:03 qb_loop_poll_add(368):9: grown poll array to 10 for
FD 18
debug Sep 28 20:22:03 handle_new_connection(647):9: IPC credentials
authenticated (3202-3255-18)
debug Sep 28 20:22:03 qb_ipcs_shm_connect(285):9: connecting to client
[3255]
debug Sep 28 20:22:03 qb_rb_open_2(238):9: shm size:1048589;
real_size:1052672; rb->word_size:263168
debug Sep 28 20:22:03 qb_rb_open_2(238):9: shm size:1048589;
real_size:1052672; rb->word_size:263168
debug Sep 28 20:22:03 qb_rb_open_2(238):9: shm size:1048589;
real_size:1052672; rb->word_size:263168
trace Sep 28 20:22:03 qb_loop_poll_add(368):9: grown poll array to 11 for
FD 18
debug Sep 28 20:22:03 qb_ipcs_dispatch_connection_request(759):9: HUP conn
(3202-3255-18)
debug Sep 28 20:22:03 qb_ipcs_disconnect(606):9:
qb_ipcs_disconnect(3202-3255-18) state:2
trace Sep 28 20:22:03 qb_rb_close(290):9: ENTERING qb_rb_close()
debug Sep 28 20:22:03 qb_rb_close_helper(337):9: Free'ing ringbuffer:
/dev/shm/qb-cmap-response-3202-3255-18-header
trace Sep 28 20:22:03 my_posix_sem_destroy(91):9: ENTERING
my_posix_sem_destroy()
trace Sep 28 20:22:03 qb_rb_close(290):9: ENTERING qb_rb_close()
debug Sep 28 20:22:03 qb_rb_close_helper(337):9: Free'ing ringbuffer:
/dev/shm/qb-cmap-event-3202-3255-18-header
trace Sep 28 20:22:03 my_posix_sem_destroy(91):9: ENTERING
my_posix_sem_destroy()
trace Sep 28 20:22:03 qb_rb_close(290):9: ENTERING qb_rb_close()
debug Sep 28 20:22:03 qb_rb_close_helper(337):9: Free'ing ringbuffer:
/dev/shm/qb-cmap-request-3202-3255-18-header
trace Sep 28 20:22:03 my_posix_sem_destroy(91):9: ENTERING
my_posix_sem_destroy()
debug Sep 28 20:22:03 handle_new_connection(647):9: IPC credentials
authenticated (3202-3257-18)
debug Sep 28 20:22:03 qb_ipcs_shm_connect(285):9: connecting to client
[3257]
debug Sep 28 20:22:03 qb_rb_open_2(238):9: shm size:1048589;
real_size:1052672; rb->word_size:263168
debug Sep 28 20:22:03 qb_rb_open_2(238):9: shm size:1048589;
real_size:1052672; rb->word_size:263168
debug Sep 28 20:22:03 qb_rb_open_2(238):9: shm size:1048589;
real_size:1052672; rb->word_size:263168
ERROR: qb_rb_chunk_read failed: Connection timed out
[trace] ENTERING qb_rb_close()
[debug] Free'ing ringbuffer: /dev/shm/qb-create_from_file-header
Current results:
$ sudo corosync-blackbox
/usr/bin/corosync-blackbox: 34: /usr/bin/corosync-blackbox: qb-blackbox: not found
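The failure above is a plain missing-binary error: corosync-blackbox is a shell wrapper that shells out to qb-blackbox, which ships in a different package. The failure mode can be sketched in POSIX shell; check_tool and the probed names below are illustrative, not taken from the actual wrapper:

```shell
#!/bin/sh
# Illustrative sketch of the corosync-blackbox failure mode: the wrapper
# invokes a helper binary that is simply not on PATH. check_tool is a
# hypothetical helper mirroring the shell's "not found" behaviour.
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found"
    else
        echo "$1: not found"
    fi
}

check_tool sh                        # present on any POSIX system
check_tool qb-blackbox-demo-absent   # deliberately nonexistent name
```

Installing the package that ships qb-blackbox (here, libqb-dev) is what makes the check succeed, which is exactly what the added dependency guarantees.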
[Where problems could occur]
The reverse dependencies of the introduced build dependency (B-D) have been analyzed:
- It only affects the corosync source:
$ apt rdepends libqb-dev
libqb-dev
Reverse Depends:
Depends: libcorosync-common-dev
Depends: libcorosync-common-dev
Depends: libcorosync-common-dev
- It's present in Bionic:
ubuntu@bionic-corosync-sysv:/etc/init.d$ apt-cache policy libqb-dev
libqb-dev:
Installed: 1.0.1-1ubuntu1
Candidate: 1.0.1-1ubuntu1
Version table:
*** 1.0.1-1ubuntu1 500
500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
100 /var/lib/dpkg/status
ubuntu@bionic-corosync-sysv:/etc/init.d$ apt depends libqb-dev
libqb-dev
PreDepends: dpkg (>= 1.17.14)
Depends: libqb0 (= 1.0.1-1ubuntu1)
Depends: libc6 (>= 2.3.4)
Suggests: libqb-doc
No other packages will be affected if libqb-dev is removed.
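The analysis above amounts to extracting the unique reverse dependencies from the apt rdepends output. A small sketch over the captured output (inlined as a string here so it runs without apt installed):

```shell
#!/bin/sh
# Extract unique reverse dependencies from the `apt rdepends libqb-dev`
# output captured above. The output is inlined so apt is not required;
# on a real system you would pipe `apt rdepends libqb-dev` instead.
rdepends_output='libqb-dev
Reverse Depends:
  Depends: libcorosync-common-dev
  Depends: libcorosync-common-dev
  Depends: libcorosync-common-dev'

# Keep only "Depends: <pkg>" lines, print the package, de-duplicate.
printf '%s\n' "$rdepends_output" | awk '/Depends: /{print $2}' | sort -u
# prints: libcorosync-common-dev
```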
[Other Info]
This solution follows the example of the same bug fixed in Focal:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1869622
[Original Description]
-----------------------------------
[Environment]
Ubuntu Xenial 16.04
Amd64
[Test Case]
1) sudo apt-get install corosync
2) sudo corosync-blackbox
root@juju-niedbalski-xenial-machine-5:/home/ubuntu# dpkg -L corosync | grep black
/usr/bin/corosync-blackbox
Expected results: corosync-blackbox runs OK.
Current results:
$ sudo corosync-blackbox
/usr/bin/corosync-blackbox: 34: /usr/bin/corosync-blackbox: qb-blackbox: not found
[Impact]
* Cannot run corosync-blackbox
[Regression Potential]
* None identified.
[Fix]
Make the corosync package depend on libqb-dev, which ships the qb-blackbox binary:
root@juju-niedbalski-xenial-machine-5:/home/ubuntu# dpkg -L libqb-dev | grep
qb-bl
/usr/sbin/qb-blackbox
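In debian/control terms, the fix is a one-line addition to the corosync binary package's Depends field. A hedged sketch of the stanza (surrounding fields abbreviated; the actual stanza in the package may differ):

```
Package: corosync
Architecture: any
Depends: libqb-dev,
         ${shlibs:Depends},
         ${misc:Depends}
```

With this in place, apt pulls in libqb-dev (and thus /usr/sbin/qb-blackbox) whenever corosync is installed, so the wrapper script always finds the binary it calls.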
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/corosync/+bug/1677684/+subscriptions
_______________________________________________
Mailing list: https://launchpad.net/~group.of.nepali.translators
Post to : [email protected]
Unsubscribe : https://launchpad.net/~group.of.nepali.translators
More help : https://help.launchpad.net/ListHelp