Bug ID: 13019
           Summary: SSL decryption memory leak when using large key log
           Product: Wireshark
           Version: 2.2.1
          Hardware: x86-64
                OS: Windows 10
            Status: UNCONFIRMED
          Severity: Normal
          Priority: Low
         Component: Dissection engine (libwireshark)

Build Information:
Compiled (64-bit) with Qt 5.6.1, with WinPcap (4_1_3), with GLib 2.42.0, with
zlib 1.2.8, with SMI 0.4.8, with c-ares 1.12.0, with Lua 5.2.4, with GnuTLS
3.2.15, with Gcrypt 1.6.2, with MIT Kerberos, with GeoIP, with QtMultimedia,
with AirPcap.

Running on 64-bit Windows 10, build 14393, with locale English_United
States.1252, with WinPcap version 4.1.3 (packet.dll version), based
on libpcap version 1.0 branch 1_0_rel0b (20091008), with GnuTLS 3.2.15, with
Gcrypt 1.6.2, without AirPcap.
Intel(R) Core(TM) i5-4670K CPU @ 3.40GHz (with SSE4.2), with 8121MB of physical memory.

When Wireshark attempts to decrypt SSL packets in a capture and the specified
SSL key log file is large (>100 MB), Wireshark uses excessive (>4 GB) memory to
dissect the packet capture.

Steps to reproduce:

1. Set the SSL key log file to a large file (125 MB in my case -- yes, I've had
key logging running for a while!)

2. Start a live capture or open a PCAP file that contains SSL packets. My test
file was about 2 MB / 5000 packets.


- If the dissection completes (it takes a while due to the enormous amount of
paging/IO being done), memory usage returns to normal and the UI works
perfectly fine.

- If I cancel the load process midway through, memory usage stays at the
elevated level.

- I think the problem lies in or around libwireshark's ssl_load_keyfile(). This
seems to be called for every single packet. Could the SSL keys be cached in
memory, especially for non-live dissections?

- I have no idea if this is useful, but all this data seems to be allocated in
8 MB chunks. Is that just an artifact of how the heap allocator works?
