Package: pipewire
Version: 0.3.70-1
Severity: important
Dear Maintainer,
I upgraded pipewire to 0.3.70 on both the client and the server side of the
native "pulseaudio" TCP tunnel (pipewire-pulse on each end).
After restarting the pipewire and pipewire-pulse user systemd services, the
pipewire-pulse instance on the client box started constantly crashing and
reloading.
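For context, the server side of the tunnel exposes the PulseAudio native
protocol over TCP through a pipewire-pulse drop-in along these lines (an
illustrative sketch, not my exact file; module-native-protocol-tcp is one of
the PulseAudio modules pipewire-pulse emulates):
"
pulse.cmd = [
    { cmd = "load-module", args = "module-native-protocol-tcp" }
]
"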
systemctl --user status pipewire-pulse
× pipewire-pulse.service - PipeWire PulseAudio
     Loaded: loaded (/usr/lib/systemd/user/pipewire-pulse.service; enabled; preset: enabled)
     Active: failed (Result: core-dump) since Sun 2023-04-30 02:07:45 CEST; 10s ago
   Duration: 1.593s
TriggeredBy: ○ pipewire-pulse.socket
    Process: 2254260 ExecStart=/usr/bin/pipewire-pulse (code=dumped, signal=SEGV)
   Main PID: 2254260 (code=dumped, signal=SEGV)
        CPU: 1.175s
avril 30 02:07:41 cyclope systemd[7728]: Started pipewire-pulse.service - PipeWire PulseAudio.
avril 30 02:07:41 cyclope pipewire-pulse[2254260]: mod.pulse-tunnel: failed to connect: Connection refused
avril 30 02:07:41 cyclope pipewire-pulse[2254260]: mod.zeroconf-discover: Can't load module: Connexion refusée
avril 30 02:07:43 cyclope systemd[7728]: Stopping pipewire-pulse.service - PipeWire PulseAudio...
avril 30 02:07:45 cyclope systemd-coredump[2254277]: Process 2254260 (pipewire-pulse) of user 1000 dumped core.
Stack trace of thread 2254260:
#0  0x00007f58a57d0ea0 pw_impl_module_schedule_destroy (libpipewire-0.3.so.0 + 0x72ea0)
#1  0x00007f58a4c64b0d do_schedule_destroy (libpipewire-module-pulse-tunnel.so + 0x5b0d)
#2  0x00007f58a5885a4f flush_items (libspa-support.so + 0x6a4f)
#3  0x00007f58a5885899 source_event_func (libspa-support.so + 0x6899)
#4  0x00007f58a5887f9e loop_iterate (libspa-support.so + 0x8f9e)
#5  0x00007f58a57cd78b pw_main_loop_run (libpipewire-0.3.so.0 + 0x6f78b)
#6  0x00005590e0b4c457 main (pipewire + 0x1457)
#7  0x00007f58a55a418a __libc_start_call_main (libc.so.6 + 0x2718a)
#8  0x00007f58a55a4245 __libc_start_main_impl (libc.so.6 + 0x27245)
#9  0x00005590e0b4c5f1 _start (pipewire + 0x15f1)
Stack trace of thread 2254264:
#0  0x00007f58a5685c06 epoll_wait (libc.so.6 + 0x108c06)
#1  0x00007f58a5896ec0 impl_pollfd_wait (libspa-support.so + 0x17ec0)
#2  0x00007f58a5887e1b loop_iterate (libspa-support.so + 0x8e1b)
#3  0x00007f58a57a7f6c do_loop (libpipewire-0.3.so.0 + 0x49f6c)
#4  0x00007f58a5605fd4 start_thread (libc.so.6 + 0x88fd4)
#5  0x00007f58a5685820 __clone (libc.so.6 + 0x108820)
Stack trace of thread 2254269:
#0  0x00007f58a5678fff __GI___poll (libc.so.6 + 0xfbfff)
#1  0x00007f58a3ec02e1 n/a (libpulse.so.0 + 0x342e1)
#2  0x00007f58a3eb1fa4 pa_mainloop_poll (libpulse.so.0 + 0x25fa4)
#3  0x00007f58a3eb2606 pa_mainloop_iterate (libpulse.so.0 + 0x26606)
#4  0x00007f58a3eb26b0 pa_mainloop_run (libpulse.so.0 + 0x266b0)
#5  0x00007f58a3ec03b9 n/a (libpulse.so.0 + 0x343b9)
#6  0x00007f58a3e6133f n/a (libpulsecommon-16.1.so + 0x5b33f)
#7  0x00007f58a5605fd4 start_thread (libc.so.6 + 0x88fd4)
#8  0x00007f58a5685820 __clone (libc.so.6 + 0x108820)
Stack trace of thread 2254275:
#0  0x00007f58a5678fff __GI___poll (libc.so.6 + 0xfbfff)
#1  0x00007f58a3ec02e1 n/a (libpulse.so.0 + 0x342e1)
#2  0x00007f58a3eb1fa4 pa_mainloop_poll (libpulse.so.0 + 0x25fa4)
#3  0x00007f58a3eb2606 pa_mainloop_iterate (libpulse.so.0 + 0x26606)
#4  0x00007f58a3eb26b0 pa_mainloop_run (libpulse.so.0 + 0x266b0)
#5  0x00007f58a3ec03b9 n/a (libpulse.so.0 + 0x343b9)
#6  0x00007f58a3e6133f n/a (libpulsecommon-16.1.so + 0x5b33f)
#7  0x00007f58a5605fd4 start_thread (libc.so.6 + 0x88fd4)
#8  0x00007f58a5685820 __clone (libc.so.6 + 0x108820)
Stack trace of thread 2254271:
#0  0x00007f58a5678fff __GI___poll (libc.so.6 + 0xfbfff)
#1  0x00007f58a3ec02e1 n/a (libpulse.so.0 + 0x342e1)
#2  0x00007f58a3eb1fa4 pa_mainloop_poll (libpulse.so.0 + 0x25fa4)
#3  0x00007f58a3eb2606 pa_mainloop_iterate (libpulse.so.0 + 0x26606)
#4  0x00007f58a3eb26b0 pa_mainloop_run (libpulse.so.0 + 0x266b0)
#5  0x00007f58a3ec03b9 n/a (libpulse.so.0 + 0x343b9)
#6  0x00007f58a3e6133f n/a (libpulsecommon-16.1.so + 0x5b33f)
#7  0x00007f58a5605fd4 start_thread (libc.so.6 + 0x88fd4)
#8  0x00007f58a5685820 __clone (libc.so.6 + 0x108820)
ELF object binary architecture: AMD x86-64
avril 30 02:07:45 cyclope systemd[7728]: pipewire-pulse.service: Main process exited, code=dumped, status=11/SEGV
avril 30 02:07:45 cyclope systemd[7728]: pipewire-pulse.service: Failed with result 'core-dump'.
avril 30 02:07:45 cyclope systemd[7728]: Stopped pipewire-pulse.service - PipeWire PulseAudio.
avril 30 02:07:45 cyclope systemd[7728]: pipewire-pulse.service: Consumed 1.175s CPU time.
(gdb) bt
#0  pw_impl_module_schedule_destroy (module=0x65756c6222203a22) at ../src/pipewire/impl-module.c:403
#1  0x00007f58a4c64b0d in do_schedule_destroy (loop=<optimized out>, async=<optimized out>, seq=<optimized out>, data=<optimized out>, size=<optimized out>, user_data=<optimized out>) at ../src/modules/module-pulse-tunnel.c:503
#2  0x00007f58a5885a4f in flush_items (impl=0x5590e270be78) at ../spa/plugins/support/loop.c:171
#3  0x00007f58a5885899 in source_event_func (source=0x5590e2713fa0) at ../spa/plugins/support/loop.c:602
#4  0x00007f58a5887f9e in loop_iterate (object=0x5590e270be78, timeout=-1) at ../spa/plugins/support/loop.c:439
#5  0x00007f58a57cd78b in pw_main_loop_run (loop=loop@entry=0x5590e270bd20) at ../src/pipewire/main-loop.c:128
#6  0x00005590e0b4c457 in main (argc=<optimized out>, argv=<optimized out>) at ../src/daemon/pipewire.c:111
(gdb) bt full
#0 pw_impl_module_schedule_destroy (module=0x65756c6222203a22) at
../src/pipewire/impl-module.c:403
impl = 0x65756c6222203a22
#1 0x00007f58a4c64b0d in do_schedule_destroy
(loop=<optimized out>, async=<optimized out>, seq=<optimized out>,
data=<optimized out>, size=<optimized out>, user_data=<optimized out>)
at ../src/modules/module-pulse-tunnel.c:503
impl = <optimized out>
#2 0x00007f58a5885a4f in flush_items (impl=0x5590e270be78) at
../spa/plugins/support/loop.c:171
item = 0x5590e270bfe8
block = false
func = <optimized out>
index = 128
flush_count = 2
avail = 128
res = <optimized out>
__func__ = "flush_items"
#3 0x00007f58a5885899 in source_event_func (source=0x5590e2713fa0) at
../spa/plugins/support/loop.c:602
s = 0x5590e2713fa0
count = 2
res = <optimized out>
__func__ = "source_event_func"
#4 0x00007f58a5887f9e in loop_iterate (object=0x5590e270be78, timeout=-1) at
../spa/plugins/support/loop.c:439
s = <optimized out>
__cancel_buf =
{__cancel_jmp_buf = {{__cancel_jmp_buf = {94080762691360,
7449369256737684004, 4294967295, 94080733604928, 94080733604920,
140018711351328, 3727605051411184164, 3670985854614682148}, __mask_was_saved =
0}}, __pad = {0x7ffdd01a78b0, 0x0, 0x7f58a580d5a9, 0x5590e275ef80}}
__cancel_routine = 0x7f58a58853d0 <cancellation_handler>
__cancel_arg = <optimized out>
__not_first_call = <optimized out>
impl = 0x5590e270be78
ep =
{{events = 1, data = 0x5590e275d9d0}, {events = 1, data =
0x5590e278ad90}, {events = 1, data = 0x5590e275a440}, {events = 1, data =
0x5590e2713fa0}, {events = 1, data = 0x5590e271fde0}, {events = 2, data =
0x5590e2760470}, {events = 3799385184, data = 0x7f58a5615799
<__GI___libc_malloc+153>}, {events = 3799379616, data = 0x7892cb34fcc2d200},
{events = 2776733779, data = 0xe}, {events = 3799385120, data =
0x7f58a5819853}, {events = 3799385200, data = 0x5590e2760420}, {events = 2,
data = 0x7f58a57f3b3b <do_replace+411>}, {events = 3799385440, data =
0x5590e2760660}, {events = 2775907424, data = 0xc}, {events = 2775907440, data
= 0x5590e275ef90}, {events = 2776733779, data = 0xa}, {events = 3799258704,
data = 0x5590e2714408}, {events = 0, data = 0x0}, {events = 0, data = 0x0},
{events = 0, data = 0x7892cb34fcc2d200}, {events = 0, data = 0x5590e275f080},
{events = 3799073040, data = 0xa}, {events = 3799073040, data =
0x7f58a580e960}, {events = 3799073040, data = 0x7f58a57e0c48
<pw_impl_metadata_register+200>}, {events = 3799073040, data = 0x5590e275ef20},
{events = 3799073040, data = 0x7f58a57f8f9c <pw_settings_expose+108>}, {events
= 3799073040, data = 0x7f58a58582d8 <log_context>},
{events = 2776689160, data = 0x7f58a57a0bb7 <pw_context_new+2663>}, {events =
3799073040, data = 0x7f58a57a0b84 <pw_context_new+2612>}, {events = 3799075328,
data = 0x559000000009}, {events = 2776688730, data = 0x7f58a58853b4
<loop_add_source+84>}, {events = 3799039608, data = 0xf}}
e = <optimized out>
i = 3
nfds = 5
cdata = {ep = 0x7ffdd01a65f0, ep_count = 5}
__func__ = "loop_iterate"
#5 0x00007f58a57cd78b in pw_main_loop_run (loop=loop@entry=0x5590e270bd20) at
../src/pipewire/main-loop.c:128
_f = <optimized out>
_res = -95
_o = <optimized out>
res = 0
__func__ = "pw_main_loop_run"
#6 0x00005590e0b4c457 in main (argc=<optimized out>, argv=<optimized out>) at
../src/daemon/pipewire.c:111
context = 0x5590e2714110
loop = 0x5590e270bd20
properties = 0x0
long_options =
{{name = 0x5590e0b4d08f "help", has_arg = 0, flag = 0x0, val =
104}, {name = 0x5590e0b4d094 "version", has_arg = 0, flag = 0x0, val = 86},
{name = 0x5590e0b4d09c "config", has_arg = 1, flag = 0x0, val = 99}, {name =
0x5590e0b4d0a3 "verbose", has_arg = 0, flag = 0x0, val = 118}, {name = 0x0,
has_arg = 0, flag = 0x0, val = 0}}
c = <optimized out>
res = 0
path =
"/usr/bin/pipewire-pulse.conf\000\000\000\000p\003\000\000\000\000\000\000p\003\000\000\000\000\000\000D\000\000\000\000\000\000\000D\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\a\000\000\000\004\000\000\000\320\350\034\000\000\000\000\000\320\350\034\000\000\000\000\000\320\350\034\000\000\000\000\000\020\000\000\000\000\000\000\000\220\000\000\000\000\000\000\000\b\000\000\000\000\000\000\000S\345td\004\000\000\000P\003\000\000\000\000\000\000P\003\000\000\000\000\000\000P\003\000\000\000\000\000\000
\000\000\000\000\000\000\000
\000\000\000\000\000\000\000\b\000\000\000\000\000\000\000P\345td\004\000\000\000\254\n\032\000\000\000\000\000"...
config_name = <optimized out>
level = <optimized out>
__func__ = "main"
(gdb) l
398     SPA_EXPORT
399     void pw_impl_module_schedule_destroy(struct pw_impl_module *module)
400     {
401             struct impl *impl = SPA_CONTAINER_OF(module, struct impl, this);
402
403             if (impl->destroy_work_id != SPA_ID_INVALID)
404                     return;
405
406             impl->destroy_work_id = pw_work_queue_add(pw_context_get_work_queue(module->context),
407                             module, 0, do_destroy_module, NULL);
(gdb) p impl
$1 = (struct impl *) 0x65756c6222203a22
(gdb) p *impl
Cannot access memory at address 0x65756c6222203a22
(gdb) up
#1  0x00007f58a4c64b0d in do_schedule_destroy (loop=<optimized out>, async=<optimized out>, seq=<optimized out>, data=<optimized out>, size=<optimized out>, user_data=<optimized out>) at ../src/modules/module-pulse-tunnel.c:503
503             pw_impl_module_schedule_destroy(impl->module);
(gdb) l
498     static int
499     do_schedule_destroy(struct spa_loop *loop,
500                     bool async, uint32_t seq, const void *data, size_t size, void *user_data)
501     {
502             struct impl *impl = user_data;
503             pw_impl_module_schedule_destroy(impl->module);
504             return 0;
505     }
506
507     void module_schedule_destroy(struct impl *impl)
(gdb) p *impl
value has been optimized out
(gdb) up
#2  0x00007f58a5885a4f in flush_items (impl=0x5590e270be78) at ../spa/plugins/support/loop.c:171
171                     item->res = func(&impl->loop, true, item->seq, item->data,
(gdb) l
166              * calls don't call the callback again. We can't update the
167              * read index before we call the function because then the item
168              * might get overwritten. */
169                     item->func = NULL;
170                     if (func)
171                             item->res = func(&impl->loop, true, item->seq, item->data,
172                                              item->size, item->user_data);
173
174                     /* if this function did a recursive invoke, it now flushed the
175                      * ringbuffer and we can exit */
(gdb) p *impl
$2 = {handle = {version = 0, get_interface = 0x7f58a58888e0
<impl_get_interface>, clear = 0x7f58a5886e70 <impl_clear>}, loop = {iface = {
type = 0x7f58a5898498 "Spa:Pointer:Interface:Loop", version = 0, cb =
{funcs = 0x7f58a589e800 <impl_loop>, data = 0x5590e270be78}}}, control = {iface
= {
type = 0x7f58a5898970 "Spa:Pointer:Interface:LoopControl", version = 1,
cb = {funcs = 0x7f58a589e7c0 <impl_loop_control>, data = 0x5590e270be78}}},
utils = {iface = {
type = 0x7f58a5898998 "Spa:Pointer:Interface:LoopUtils", version = 0, cb
= {funcs = 0x7f58a589e760 <impl_loop_utils>, data = 0x5590e270be78}}}, log =
0x5590e270ba50,
system = 0x5590e270bde0, source_list = {next = 0x5590e2817a88, prev =
0x5590e2713fd8}, destroy_list = {next = 0x5590e270bf10, prev = 0x5590e270bf10},
hooks_list = {
list = {next = 0x5590e270bf20, prev = 0x5590e270bf20}}, poll_fd = 4, thread
= 140018707834688, enter_count = 1, wakeup = 0x5590e2713fa0, ack_fd = 6, buffer
= {
readindex = 128, writeindex = 256}, buffer_data = 0x5590e270bf68 "@",
buffer_mem = "@", '\000' <repeats 15 times>,
"\001\000\000\000\000\000\000\000\250\277p\342\220U", '\000' <repeats 18
times>, "`Mw\342\220U\000\000\000\000\000\000\000\000\000\000@", '\000'
<repeats 15 times>, "\001\000\000\000\000\000\000\000\350\277p\342\220U",
'\000' <repeats 18 times>,
"\240\254{\342\220U\000\000\000\000\000\000\000\000\000\000@", '\000' <repeats
15 times>, "\001\000\000\000\000\000\000\000(\300p\342\220U", '\000' <repeats
18 times>,
"P\020\200\342\220U\000\000\000\000\000\000\000\000\000\000@\000\000\000\000\000\000\000"...,
flush_count = 2, polling = 0}
On the client box, disabling
~/.config/pipewire/pipewire-pulse.conf.d/pulse-native-tcp-client.conf
"
pulse.cmd = [
    { cmd = "load-module", args = "module-zeroconf-discover" }
]
"
avoids the pipewire-pulse crash.
I tried upstream master and it seems to fix the issue (there is no release
containing the fix yet). Do you plan on releasing 0.3.70 to unstable? If so,
I will try to find the commit on the master branch that fixed the issue.
Cheers,
Alban
-- System Information:
Debian Release: 11.7
APT prefers stable-updates
APT policy: (500, 'stable-updates'), (500, 'stable-security'), (500,
'stable-debug'), (500, 'stable'), (500, 'oldstable'), (100, 'testing-debug'),
(100, 'testing'), (90, 'unstable-debug'), (90, 'unstable'), (1,
'experimental-debug'), (1, 'experimental')
merged-usr: no
Architecture: amd64 (x86_64)
Foreign Architectures: i386
Kernel: Linux 6.1.0-7-amd64 (SMP w/4 CPU threads; PREEMPT)
Kernel taint flags: TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=fr_FR.UTF-8, LC_CTYPE=fr_FR.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /bin/bash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled
Versions of packages pipewire depends on:
ii adduser 3.118
ii init-system-helpers 1.60
ii libpipewire-0.3-modules 0.3.70-1
ii pipewire-bin 0.3.70-1
pipewire recommends no packages.
pipewire suggests no packages.
-- no debconf information
_______________________________________________
Pkg-utopia-maintainers mailing list
[email protected]
https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/pkg-utopia-maintainers