On Fri, Jul 21, 2017 at 2:16 PM, danielos <[email protected]> wrote:

>
> Actually I found something (whole guacd log attached to e-mail message):
>
> guacd[1]: INFO: Connection "$5ac08735-d767-419f-997a-a80e74f6704b" removed.
> guacd[1]: INFO: Creating new client for protocol "rdp"
> guacd[1]: INFO: Connection ID is "$7293ee8c-133f-4844-af8f-ce124b9693e3"
> guacd[289]: INFO:       No security mode specified. Defaulting to RDP.
> guacd[289]: INFO:       Resize method: none
> guacd[289]: INFO:       User "@09c1c2a7-7554-4453-85a1-19dda4c8cd96" joined connection "$7293ee8c-133f-4844-af8f-ce124b9693e3" (1 users now present)
> guacd[289]: ERROR:      Password authentication failed: Authentication failed (username/password)
> guacd[289]: INFO:       User "@09c1c2a7-7554-4453-85a1-19dda4c8cd96" disconnected (0 users remain)
> guacd[289]: INFO:       Last user of connection "$7293ee8c-133f-4844-af8f-ce124b9693e3" disconnected
> *** Error in `/usr/local/sbin/guacd': free(): invalid pointer: 0x00007fb74c00b5a0 ***
> ======= Backtrace: =========
> /lib64/libc.so.6(+0x7c503)[0x7fb763e51503]
> /usr/local/lib/libguac-client-rdp.so(guac_common_ssh_destroy_user+0x23)[0x7fb75c4117d3]
> /usr/local/lib/libguac-client-rdp.so(guac_rdp_client_free_handler+0x61)[0x7fb75c405ba1]
> /usr/local/lib/libguac.so.12(guac_client_free+0x32)[0x7fb7658bfb72]
> /usr/local/sbin/guacd[0x404310]
> /usr/local/sbin/guacd[0x403a80]
> /lib64/libpthread.so.0(+0x7dc5)[0x7fb764efddc5]
> /lib64/libc.so.6(clone+0x6d)[0x7fb763ecc73d]
> ======= Memory map: ========
> 00400000-00407000 r-xp 00000000 00:26 23 /usr/local/sbin/guacd
> 00606000-00607000 r--p 00006000 00:26 23 /usr/local/sbin/guacd
> 00607000-00608000 rw-p 00007000 00:26 23 /usr/local/sbin/guacd
> 00d6b000-0158a000 rw-p 00000000 00:00 0 [heap]
>
>
That particular issue is:

https://issues.apache.org/jira/browse/GUACAMOLE-194

which should be fixed in the current 0.9.13-incubating release candidate. I
don't think this particular crash would cause guacd to spin and eat 100%
CPU, but it's worth trying the "0.9.13-incubating-RC1" tag of the Docker
images to see whether the issue remains reproducible.

Are you able to reproduce this outside of Docker?

- Mike
