For purely graphical connections like VNC and RDP, no, OCR would really be the only option. Once the data leaves the VNC/RDP server, it's just a fragment of an image and has lost all text meaning. If you enable recording of keyboard events, you would be able to infer what is being typed, but the only way to read the graphical content of the screen would be OCR.
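For example, if key events are recorded, you could recover printable keystrokes from a dump of the recorded protocol instructions. A rough sketch, assuming the stream is available as plain text (the protocol encodes each element as <length>.<value>, with elements comma-separated and instructions semicolon-terminated; the function names here are illustrative, not part of any Guacamole API):

```python
def parse_instructions(data: str):
    """Split a Guacamole protocol stream into instructions.

    Each element is encoded as <length>.<value>; elements are
    separated by commas and each instruction ends with ';'.
    """
    pos = 0
    while pos < len(data):
        elements = []
        while True:
            dot = data.index('.', pos)
            length = int(data[pos:dot])
            value = data[dot + 1:dot + 1 + length]
            elements.append(value)
            pos = dot + 1 + length
            terminator = data[pos]
            pos += 1
            if terminator == ';':
                break
        yield elements

def typed_text(recording: str) -> str:
    """Reconstruct printable keystrokes from recorded key events.

    A key instruction carries (keysym, pressed); X11 keysyms for
    printable Latin-1 characters equal their code points, so only
    those are decoded here. Modifiers and special keys are skipped.
    """
    out = []
    for inst in parse_instructions(recording):
        if inst[0] == 'key' and inst[2] == '1':  # key-press only
            keysym = int(inst[1])
            if 0x20 <= keysym <= 0xFF:
                out.append(chr(keysym))
    return ''.join(out)
```

This ignores backspace, arrow keys, and anything outside Latin-1, so it is an approximation of what was typed rather than a faithful transcript.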
For connections driven by text like SSH, telnet, and Kubernetes, you can leverage Guacamole's support for typescripts. Each typescript is the raw text data received from the server prior to being rendered, including console codes, coupled with a separate file containing timing information.

- Mike

On Wed, Mar 25, 2020 at 1:52 PM Adrian Owen <[email protected]> wrote:
> Mike,
>
> Is there a less convoluted way to grab the text displayed?
>
> *From:* Mike Jumper [mailto:[email protected]]
> *Sent:* 25 March 2020 20:42
> *To:* [email protected]
> *Subject:* Re: guacenc new parameters
>
> On Wed, Mar 25, 2020 at 8:58 AM Adrian Owen <[email protected]> wrote:
>
> I had an idea for another parameter to guacenc.
>
> guacenc generates an M4V file.
>
> Could it optionally generate PNG snapshot images instead, every second?
> 1.file.png, 2.file.png ….
>
> Why?
>
> - Mike
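Since a typescript contains the raw stream including console codes, getting plain readable text out of one usually means stripping terminal escape sequences. A minimal sketch, assuming the typescript data has been read as bytes (the regex is a simplification that covers typical CSI sequences, not every escape defined by the terminal):

```python
import re

# Matches CSI sequences (ESC [ parameters ... final byte) plus other
# single-character ESC-prefixed escapes. A simplification: it does not
# consume the payload of OSC title-setting sequences, for example.
ANSI_ESCAPE = re.compile(r'\x1b(\[[0-9;?]*[ -/]*[@-~]|[@-Z\\-_])')

def strip_console_codes(typescript: bytes) -> str:
    """Return the typescript's text with escape sequences removed."""
    text = typescript.decode('utf-8', errors='replace')
    text = ANSI_ESCAPE.sub('', text)
    return text.replace('\r\n', '\n')
```

Carriage returns and backspaces used for in-place redrawing (progress bars, line editing) would still need handling if you want the final rendered state rather than everything that was ever sent.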
