Hello!

On Tue, Jan 09, 2018 at 07:08:41PM +0000, debayang.qdt wrote:
> I had this observation while benchmarking nginx with 48 workers and
> wrk on two separate, back-to-back, high-speed connected systems
> (arm), with several random files being accessed from the client.
> As you rightly mentioned, this may not have impacted the performance
> of any real-time workload in any significant way, as has been
> observed during benchmarking.
> However, if it's easy to avoid shared memory contention, it may
> make sense to avoid it, as it might have a negative impact on
> some platforms under peak loads.

The other part of the problem is that if you can easily avoid some
code, it makes sense to avoid it, as any code has maintenance costs.
The same applies to memory usage: if you can avoid using more memory,
you should, since there are various embedded devices where memory is
quite limited.

The current nginx approach is to use 128 bytes for each variable to
avoid cache line invalidation on modifications of unrelated
variables.  So far we haven't seen valid reasons to extend this to
something more complex.

> Also, in the code the counter slot size was kept at 128, with a
> comment like "keep equal to or more than CL size".
> Does it make sense to set it to ngx_cacheline_size rather than
> hardcoding it to the largest CL size?

I don't think there is a big difference in terms of memory usage: in
both cases it's large, whether you allocate 128 bytes or
ngx_cacheline_size bytes for each ngx_processes slot.  On the other
hand, using ngx_cacheline_size might result in problems if it somehow
ends up being different in different processes using the same shared
memory segment.

-- 
Maxim Dounin
http://mdounin.ru/

_______________________________________________
nginx-devel mailing list
[email protected]
http://mailman.nginx.org/mailman/listinfo/nginx-devel
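For illustration, below is a minimal sketch of the kind of layout being
discussed: one counter slot per process in a shared memory mapping,
padded to a hardcoded 128 bytes so that updates from different
processes never touch the same cache line.  This is not the nginx
source; the type and function names and the worker count are made up
for the example.

    /*
     * Illustrative sketch only (not nginx source): per-process
     * counters kept in shared memory, each padded to 128 bytes so
     * that two processes updating adjacent counters never share a
     * cache line.  The 128-byte figure mirrors the "equal to or more
     * than CL size" comment discussed above; NWORKERS is an example
     * value.
     */

    #include <stdint.h>
    #include <sys/mman.h>

    #define COUNTER_SLOT_SIZE  128      /* >= cache line size on known CPUs */
    #define NWORKERS           48       /* example worker count */

    typedef struct {
        volatile uint64_t  value;
        unsigned char      pad[COUNTER_SLOT_SIZE - sizeof(uint64_t)];
    } counter_slot_t;

    static counter_slot_t  *counters;   /* one 128-byte slot per process */

    static int
    counters_init(void)
    {
        /* anonymous shared mapping, inherited by forked worker processes */
        counters = mmap(NULL, NWORKERS * sizeof(counter_slot_t),
                        PROT_READ | PROT_WRITE,
                        MAP_ANON | MAP_SHARED, -1, 0);

        return (counters == MAP_FAILED) ? -1 : 0;
    }

    static void
    counter_incr(unsigned slot)
    {
        /* each worker writes only its own slot, so no cache line ping-pong */
        __sync_fetch_and_add(&counters[slot].value, 1);
    }

Hardcoding the slot size keeps the layout of the shared segment
identical regardless of what a runtime cache line probe reports, which
is the concern above about ngx_cacheline_size possibly differing
between processes using the same shared memory segment.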
