On 10/10/15 11:35, Zefan Li wrote:
> On 2015/10/9 18:29, Zhang Haoyu wrote:
>> I started multiple docker containers on CentOS 6.6 (linux-2.6.32-504.16.2),
>> and one bad program was running in one of the containers.
>> This program produced many child threads continuously.
is still there, I'm not sure.
IMO, we should isolate the pid accounting and pid_max between pid namespaces,
and make them per-pidns.

The post below already requested making pid_max per pidns:
http://thread.gmane.org/gmane.linux.kernel/1108167/focus=210
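To make the global nature concrete, here is a minimal userspace sketch (the
helper names are mine, purely for illustration): kernel.pid_max is read from a
single host-wide /proc file, and the numeric entries in /proc cover tasks from
every pid namespace, so one container's runaway forks count against the limit
seen by all containers.

```c
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>

/* Read the single host-wide pid limit; there is no per-pidns variant. */
static int read_pid_max(void)
{
	FILE *f = fopen("/proc/sys/kernel/pid_max", "r");
	int pid_max = -1;

	if (f) {
		if (fscanf(f, "%d", &pid_max) != 1)
			pid_max = -1;
		fclose(f);
	}
	return pid_max;
}

/* Count live pids: numeric directory entries in /proc.  From the
 * init pid namespace this sees tasks of every container. */
static int count_pids(void)
{
	DIR *d = opendir("/proc");
	struct dirent *de;
	int n = 0;

	if (!d)
		return -1;
	while ((de = readdir(d)) != NULL)
		if (isdigit((unsigned char)de->d_name[0]))
			n++;
	closedir(d);
	return n;
}
```

When count_pids() approaches read_pid_max(), fork() starts failing with
EAGAIN host-wide, which is the symptom described at the top of this thread.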
Thanks,
Zhang Haoyu
Hi all,

On Thu, Nov 27, 2014 at 03:20:43PM +0800, Zhang Haoyu wrote:
> I tested win-server-2008 with "-cpu
> core2duo,hv_spinlocks=0x,hv_relaxed,hv_time";
> this problem still happened, with about 200,000 vmexits per second,
> giving a very bad experience, just like being stuck.

Please upload a full
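For reference, a sketch of the kind of invocation and measurement involved (a
hedged example, not the reporter's exact setup: the hv_spinlocks value 0x1fff,
memory size, and image name are assumptions; the truncated value in the quote
above is left as-is):

```shell
# Launch the Windows guest with Hyper-V enlightenments (values illustrative):
qemu-system-x86_64 -enable-kvm -m 4096 \
    -cpu core2duo,hv_spinlocks=0x1fff,hv_relaxed,hv_time \
    -drive file=win2008.img,format=raw

# On the host, record VM exits for 10 seconds and summarise them by reason:
perf kvm stat record -p "$(pidof qemu-system-x86_64)" sleep 10
perf kvm stat report
```

The per-reason exit counts from perf kvm stat are the usual way to confirm a
rate on the order of the 200,000 exits/second quoted above.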
Hi, Razya, Shirley,

I am going to test the combination of
"several (depending on the total number of cpus on the host, e.g., total_number * 1/3)
vhost threads serve all VMs" and "vhost: add polling mode".
I now have the patch
"http://thread.gmane.org/gmane.comp.emulators.kvm.devel/88682/focus=88723"
posted by Shirley; any update to this patch?

And I want to make a small change to this patch: create total_cpu_number *
1/N (N={3,4}) vhost threads instead of a per-cpu vhost thread to serve all VMs;
I think per-cpu vhost threads are too many.
Any ideas?

Thanks,
Zhang Haoyu
+static int poll_start_rate = 0;
+module_param(poll_start_rate, int, S_IRUGO|S_IWUSR);
+MODULE_PARM_DESC(poll_start_rate, "Start continuous polling of virtqueue when rate of events is at least this number per jiffy");
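For context, a self-contained sketch of how such a threshold could gate the
switch into polling mode (an assumption on my part: the helper name and the
surrounding logic are mine, not taken from the patch; only poll_start_rate
comes from the hunk above):

```c
#include <stdbool.h>

/* Module parameter from the patch above: start continuous polling of
 * the virtqueue once the event rate reaches this many events per
 * jiffy.  A value of 10 is used here just so the sketch is runnable. */
static int poll_start_rate = 10;

/* Hypothetical helper: decide whether the observed guest-kick rate is
 * high enough that busy-polling the virtqueue beats sleeping and
 * taking a notification (and its vmexit) per event. */
static bool should_start_polling(unsigned long events,
				 unsigned long elapsed_jiffies)
{
	if (elapsed_jiffies == 0)
		return false;	/* no interval measured yet */
	return events / elapsed_jiffies >= (unsigned long)poll_start_rate;
}
```

With poll_start_rate = 0 the condition is always met, which matches the
parameter description's framing of 0 as the most aggressive setting; a larger
value keeps the vhost thread sleeping under light load.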