Found, fixed, and pushed:
https://github.com/TurboVNC/turbovnc/commit/db1d9bb8cfd808a5882006ac658110211c60aeed

Upgrade your server to the 2.2.x stable pre-release RPM from
https://turbovnc.org/DeveloperInfo/PreReleases, and you should be good.
2.2.3 will be officially released later this month.
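If it helps, the upgrade is just a matter of grabbing the RPM from that
page and installing it (the file name below is only an example; use
whichever 2.2.x pre-release build is current):

    sudo yum install ./turbovnc-2.2.2.90.x86_64.rpm

Then kill and restart your TurboVNC sessions so they pick up the new
Xvnc binary, e.g.:

    /opt/TurboVNC/bin/vncserver -kill :20
    /opt/TurboVNC/bin/vncserver :20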
On 7/31/19 1:31 PM, Richard Ems wrote:
> Great, was already creating the user and setting things up ... standing by.
>
> On Wed, 31 Jul 2019 at 15:28, DRC <[email protected]> wrote:
>
> Stand by. I might have just been able to reproduce it using x11perf.
>
> On 7/31/19 1:11 PM, Richard Ems wrote:
>> Hi DRC,
>>
>> Yes, I reproduced the leak while valgrind was running. I couldn't find
>> any more output in the log file than a couple of lines saying that
>> valgrind had started.
>> I can provide you with SSH access. You could then start your own VNC
>> session and tunnel it through SSH; would that be OK for you to test
>> yourself? Or, if you build Xvnc with LSan and can provide me with that
>> binary, I could try to run it here.
>>
>> Thanks,
>> Richard
>>
>> On Wed, 31 Jul 2019 at 13:34, DRC <[email protected]> wrote:
>>
>> So you reproduced the leak while valgrind was running? If so, then I
>> don't understand why it didn't catch it. The only other idea I have is
>> to try building Xvnc with LeakSanitizer (which is in clang) and see if
>> it can catch the leak in real time.
>>
>> If I can actually see the leak and find out where it is in the code,
>> it's probably a one-line fix. But without the ability to reproduce it
>> myself, I'm not sure what else to do. If you can provide remote access
>> to a machine that reproduces the problem, as well as reliable
>> reproduction steps, I'm happy to look into it.
>>
>> I will try building Xvnc with LSan on my end and see if I can spot any
>> problems using some test programs.
>>
>> On 7/31/19 11:09 AM, Richard Ems wrote:
>>> Hi DRC,
>>>
>>> I started Xvnc with valgrind --leak-check=full but got no error/leak
>>> messages from valgrind in the log file.
>>>
>>> I then contacted the StarCCM+ support, and they asked me to try to
>>> reproduce the problem using TightVNC, which I probably should have
>>> tried first.
>>> I then installed the tigervnc-server package from RHEL 7 and started
>>> that vncserver. I cannot reproduce the memory leak on that Xvnc
>>> version.
>>>
>>> So I am back to assuming that the issue is in the Xvnc version from
>>> TurboVNC :)
>>> What do you think?
>>> Any proposals on what I can do next?
>>>
>>> Many thanks,
>>> Richard
>>>
>>> On Tue, 9 Jul 2019 at 21:08, DRC <[email protected]> wrote:
>>>
>>> Since you can reproduce the problem, I would recommend running the
>>> Xvnc process with 'valgrind --leak-check=full' (the easiest way to do
>>> that is to modify /opt/TurboVNC/bin/vncserver). That should detect
>>> whether there are any leaks in Xvnc. Your description strongly
>>> suggests that the issue is with StarCCM+, and if it goes away when
>>> that application is closed, then that would suggest even more
>>> strongly that the application is to blame. If it appears that
>>> valgrind is detecting something, then send me the output.
>>>
>>> DRC
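For anyone who wants to replicate that valgrind setup, a minimal sketch
(this wraps the Xvnc binary rather than editing the vncserver script,
and assumes write access to /opt/TurboVNC/bin; the log file path is
arbitrary):

    cd /opt/TurboVNC/bin
    mv Xvnc Xvnc.real

Then create /opt/TurboVNC/bin/Xvnc with the following contents and make
it executable (chmod +x Xvnc):

    #!/bin/sh
    # Run the real Xvnc under valgrind.  The leak report is only written
    # when Xvnc exits, so kill the session normally before reading the
    # log (valgrind expands %p to the process ID).
    exec valgrind --leak-check=full --log-file=/tmp/Xvnc-valgrind.%p.log \
        /opt/TurboVNC/bin/Xvnc.real "$@"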
>>> On 7/9/19 6:24 PM, Richard Ems wrote:
>>>> Hi DRC,
>>>>
>>>> I've found a way to reproduce the memory leak now:
>>>>
>>>> 1. I start from scratch, restarting the Xvnc session.
>>>> 2. Then I start starccm+, load a simulation file, and run it for
>>>> some iterations.
>>>> 3. I stop the simulation and close it, BUT LEAVE starccm+ OPEN,
>>>> without any loaded simulation file.
>>>> 4. I monitor the Xvnc memory usage with "ps u `pidof Xvnc`" and can
>>>> see that the virtual memory size starts increasing about 10 seconds
>>>> after the simulation has been closed, and it doesn't seem to stop as
>>>> long as starccm+ is open (without any file loaded).
>>>>
>>>> Does that help in any way?
>>>> Very strange behaviour.
>>>> I will try with other StarCCM+ versions.
>>>>
>>>> Thanks,
>>>> Richard
>>>>
>>>> On Mon, 8 Jul 2019 at 17:44, DRC <[email protected]> wrote:
>>>>
>>>> OK, thanks. You might get some traction by building Xvnc using
>>>> clang's AddressSanitizer and LeakSanitizer. Without a specific
>>>> procedure to reproduce the problem, there is unfortunately little I
>>>> can do. What I will do, however, is double-check the X.org commit
>>>> log and see if there are any leak-related patches that I need to
>>>> back-port into the TurboVNC Server.
>>>>
>>>> libjpeg-turbo-official has never been necessary for our run-time
>>>> packages. It is only necessary when building VirtualGL or TurboVNC.
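A rough sketch of such a sanitizer build, assuming a checkout of the
TurboVNC source tree and its CMake-based build (the exact options may
differ between releases):

    # Build Xvnc with clang and LeakSanitizer; LSan prints a leak
    # report when the instrumented Xvnc process exits.
    export CC=clang CXX=clang++
    cmake -G"Unix Makefiles" \
          -DCMAKE_C_FLAGS="-fsanitize=leak -g" \
          -DCMAKE_CXX_FLAGS="-fsanitize=leak -g" \
          /path/to/turbovnc
    make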
>>>> On 7/8/19 3:24 PM, Richard Ems wrote:
>>>>> Hi DRC,
>>>>>
>>>>> Thanks for your answer.
>>>>>
>>>>> Good to know that installing libjpeg-turbo-official is not needed;
>>>>> I have removed it now. I think it was needed some time ago, wasn't
>>>>> it?
>>>>>
>>>>> Yes, this is reproducible with TurboVNC 2.2.1. We were seeing the
>>>>> high memory usage on TurboVNC 2.2.1, and that's why I updated to
>>>>> 2.2.2, hoping the issue would go away. But the issue seems to be
>>>>> the same on both versions, 2.2.1 and 2.2.2.
>>>>>
>>>>> And no, I am not sure it can only be reproduced with StarCCM+ (yes,
>>>>> right "guess" ;-) ). I did not try to reproduce it with other
>>>>> software. I couldn't even reproduce it with StarCCM+ myself, and
>>>>> I'm not sure what the users do over the course of days to end up
>>>>> with this high memory usage.
>>>>>
>>>>> Thanks for the explanations and for looking into it.
>>>>>
>>>>> Cheers,
>>>>> Richard
>>>>>
>>>>> On Mon, 8 Jul 2019 at 17:01, DRC <[email protected]> wrote:
>>>>>
>>>>> Unsure what is going on. What I can tell you is that:
>>>>>
>>>>> - Yes, that memory usage is too high.
>>>>>
>>>>> - The problem is with TurboVNC, not VirtualGL. If it were a leak in
>>>>> VirtualGL, then the memory usage would rise in the 3D application
>>>>> process, not in the Xvnc process. The memory leaks that I fixed in
>>>>> VirtualGL 2.6.2 truly were minor, so minor that they would have
>>>>> gone unnoticed unless you ran VirtualGL through valgrind (which is
>>>>> how I detected them). The leaks mostly took the form of memory that
>>>>> was not properly freed at shutdown, so for the most part, they
>>>>> didn't grow the memory usage of VirtualGL while the 3D application
>>>>> was running. I fixed them primarily to make it easier to detect
>>>>> more serious leaks, if any are introduced in the future.
>>>>>
>>>>> Note that, if you are using the official VirtualGL and TurboVNC
>>>>> packages, those packages statically link with a specific version of
>>>>> libjpeg-turbo, so installing libjpeg-turbo-official is unnecessary,
>>>>> and the installed version of that package is not relevant for
>>>>> diagnostic purposes.
>>>>>
>>>>> I checked the diff between TurboVNC 2.2.1 and 2.2.2 and didn't see
>>>>> any obvious areas of concern. Some follow-up questions:
>>>>>
>>>>> - Are you sure that this isn't reproducible with TurboVNC 2.2.1?
>>>>>
>>>>> - Are you sure that it can only be reproduced with one specific 3D
>>>>> application? (StarCCM+, I'm guessing?)
>>>>>
>>>>> DRC
>>>>>
>>>>> On 7/5/19 9:36 AM, Richard Ems wrote:
>>>>>> Hi all, hi DRC,
>>>>>>
>>>>>> DRC, thanks for your great work! TurboVNC + VirtualGL +
>>>>>> libjpeg-turbo work great!
>>>>>>
>>>>>> But now we are seeing what looks like a memory leak in Xvnc.
>>>>>> On a RHEL 7.6 workstation, we were using turbovnc-2.2.1,
>>>>>> VirtualGL-2.6.1, and libjpeg-turbo-official-2.0.2 up until one
>>>>>> week ago.
>>>>>> We were seeing Xvnc consume lots of memory, so I checked and found
>>>>>> "Fixed several minor memory leaks in the VirtualGL Faker." in the
>>>>>> "Significant changes relative to 2.6.1" list for VirtualGL-2.6.2.
>>>>>> So I upgraded TurboVNC from 2.2.1 to 2.2.2 and VirtualGL from
>>>>>> 2.6.1 to 2.6.2 and restarted all VNC sessions.
>>>>>>
>>>>>> The memory usage for Xvnc was low for some days, but now, after
>>>>>> one week of running, one of the VNC sessions shows 601.4g of
>>>>>> virtual memory and 127.9g of resident memory in use in "top".
>>>>>> That seems to be too much, or not?
>>>>>>
>>>>>> In /etc/turbovncserver.conf I've set "$useVGL = 1;". Might this be
>>>>>> an issue? This setting has been there for months, and we were not
>>>>>> seeing this issue before.
>>>>>>
>>>>>> The Xvnc command line shown by "ps" is:
>>>>>>
>>>>>> # ps uw 364412
>>>>>> USER       PID %CPU %MEM       VSZ       RSS TTY STAT START   TIME COMMAND
>>>>>> <USER>  364412  1.1 25.4 632004096 134290304 ?   Sl   Jun27 133:52
>>>>>> /opt/TurboVNC/bin/Xvnc :20 -desktop TurboVNC: serv1.bartech.local:20
>>>>>> (<USER>) -httpd /opt/TurboVNC/bin//../java -auth
>>>>>> /home/<USER>/.Xauthority -geometry 1240x900 -depth 24 -rfbwait
>>>>>> 120000 -rfbauth /home/<USER>/.vnc/passwd -x509cert
>>>>>> /home/<USER>/.vnc/x509_cert.pem -x509key
>>>>>> /home/<USER>/.vnc/x509_private.pem -rfbport 5920 -fp
>>>>>> catalogue:/etc/X11/fontpath.d -deferupdate 1 -dridir /usr/lib64/dri
>>>>>> -registrydir /usr/lib64/xorg
>>>>>>
>>>>>> This user is continuously running an application that gets started
>>>>>> with "-clientldpreload libvglfaker.so".
>>>>>>
>>>>>> What is going on here?
>>>>>>
>>>>>> Kind regards,
>>>>>> Richard Ems
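For anyone watching Xvnc memory the way Richard describes above, a
minimal sampling loop (this assumes a single Xvnc process on the
machine; otherwise substitute the PID of the session in question):

    # Print PID, virtual size, and resident size every 10 seconds.
    while true; do
        ps -o pid,vsz,rss,etime,comm -p "$(pidof Xvnc)"
        sleep 10
    done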
