On Tue, Mar 6, 2018 at 8:55 PM, Amarjeet Singh <amarjee...@gmail.com> wrote:

> Hi Mike,
>
>
When creating a new thread, please address the list as a whole. The idea
behind the mailing lists is to facilitate help/discussion within the entire
community. Calling out an individual at the beginning defeats that.

> Now that most of my feature-related issues are known and most of them are
> fixed, I have moved on to performance testing.
> ...
> When I run a single session with Guacamole, open MSN.com, and leave it
> open, I can see that the CPU utilization on my server fluctuates between
> 3% and 30%. The average is 5-6%, but sometimes it hits 30%.
>
>
Instantaneous sampling of CPU usage is an extremely poor metric for gauging
overall scalability, particularly for something as subjective as remote
desktop performance. Load average is a better metric, but still not as good
as an actual load test. To perform such a load test, you would need to
actually connect multiple, independent machines (each with their own
browser instance) and use those machines to interact with separately-hosted
remote desktops, gauging subjective performance as load increases.
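If you do want a number to watch while testing, load average is easy to
sample programmatically. A minimal sketch (Python, assuming a Linux/Unix
host; watching "uptime" or /proc/loadavg would work just as well):

    import os, time

    # Log the kernel's 1-, 5-, and 15-minute load averages once a minute.
    while True:
        one, five, fifteen = os.getloadavg()
        print("load: %.2f %.2f %.2f" % (one, five, fifteen))
        time.sleep(60)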

I have actually performed exactly such a load test by:

1) Creating simulated but realistic load by scripting a remote desktop
session using image recognition software (Sikuli - http://www.sikuli.org/);
a rough sketch of such a script follows this list
2) Automating the deployment of such simulated users by creating an image
in EC2
3) Gradually increasing the simulated load from tens to hundreds of users
4) As load increases, relying on actual humans (who are connected to the
same guac server) to continuously use their remote desktops and report when
performance appears degraded
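Here is a rough sketch of what such a Sikuli (Jython) script looks like.
The .png files are hypothetical screenshots of the browser-rendered remote
desktop, which you would capture for your own environment:

    # One simulated user session, driven by image recognition.
    from sikuli import *  # implicit in the Sikuli IDE; explicit otherwise

    wait("rdp_desktop.png", 60)      # wait for the remote desktop to appear
    doubleClick("browser_icon.png")  # launch an application
    wait("address_bar.png", 30)
    type("www.msn.com\n")            # drive the app like a real user would
    sleep(30)                        # idle time between actions, as a real user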

Based on these tests, we found that a typical server should be fine so long
as roughly 1 CPU core and 2 GB of memory are available for every 25
concurrent users at peak. Subjective performance of any particular
individual's remote desktop should not degrade until that level of overall
load is exceeded, and even then such degradation is gradual.

You should also be sure to modify Tomcat's server.xml to specify the "NIO"
connector. Some configurations use the blocking I/O connector by default,
which works fine but has issues scaling for large numbers of long-lived
connections like those typical of a Guacamole deployment.
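For example, in server.xml (the port and other attributes will vary with
your deployment; the "protocol" attribute is what selects the NIO
connector):

    <Connector port="8080"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               connectionTimeout="20000"
               redirectPort="8443" />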


> I have to test this server to support around 50 simultaneous users.
>

Based on the above load tests, you would need roughly 2 CPU cores and 4 GB
of memory to support that load at peak for normal remote desktop use. If
you are virtualizing things, this will also depend on how well-allocated
your server resources are.
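In other words, a back-of-the-envelope estimate based purely on the 1 core
/ 2 GB per 25 users figure above (actual requirements will depend on your
users' workloads):

    import math

    # One CPU core and 2 GB of memory per ~25 concurrent users at peak.
    def required_resources(concurrent_users, users_per_core=25):
        cores = int(math.ceil(concurrent_users / float(users_per_core)))
        memory_gb = cores * 2
        return cores, memory_gb

    print(required_resources(50))  # -> (2, 4)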

> Although users are not going to watch YouTube, with 5% average CPU
> consumption for MSN.com for a single user, I don't see how I could support
> more than 20 sessions.
>
>
This is not how things work in practice. The CPU consumption of guacd is
very bursty, its built-in optimizer continuously tracks and adjusts for
response/processing times, and the OS kernel does a very good job of
scheduling tasks given load. Even assuming that 5% CPU usage were required
on average, the question is not "what is 100% divided by 5%" but "how much
less than 5% produces a subjective difference".

- Mike
