On Tue, Nov 23, 2010 at 09:23:41PM +0800, lidong chen wrote:
> At this point, I'd suggest testing vhost-net on the upstream kernel,
> not on rhel kernels. The change that introduced per-device threads is:
> c23f3445e68e1db0e74099f264bc5ff5d55ebdeb
> I will try this tomorrow.
> 
> Is CONFIG_SCHED_DEBUG set?
> yes. CONFIG_SCHED_DEBUG=y.

Disable it. Either debug the scheduler or perf-test it :)

> 2010/11/23 Michael S. Tsirkin <m...@redhat.com>:
> > On Tue, Nov 23, 2010 at 10:13:43AM +0800, lidong chen wrote:
> >> I tested performance with the per-vhost kthread disabled and enabled.
> >>
> >> Test method:
> >> Send the same traffic load with the per-vhost kthread disabled and
> >> enabled, and compare the host OS CPU utilization.
> >> I ran five VMs on KVM, each with five NICs.
> >> The vhost version with the per-vhost kthread disabled is RHEL6
> >> beta 2 (2.6.32.60).
> >> The vhost version with the per-vhost kthread enabled is RHEL6
> >> (2.6.32-71).
> >
> > At this point, I'd suggest testing vhost-net on the upstream kernel,
> > not on rhel kernels. The change that introduced per-device threads is:
> > c23f3445e68e1db0e74099f264bc5ff5d55ebdeb
> >
> >> Test result:
> >> With the per-vhost kthread disabled, host OS CPU utilization is 110%.
> >> With the per-vhost kthread enabled, host OS CPU utilization is 130%.
> >
> > Is CONFIG_SCHED_DEBUG set? We are stressing the scheduler a lot with
> > vhost-net.
> >
> >> In 2.6.32.60, the whole system has only one vhost kthread.
> >> [r...@rhel6-kvm1 ~]# ps -ef | grep vhost
> >> root       973     2  0 Nov22 ?        00:00:00 [vhost]
> >>
> >> In 2.6.32-71, the whole system has 25 kthreads (5 VMs x 5 NICs).
> >> [r...@kvm-4slot ~]# ps -ef | grep vhost-
> >> root     12896     2  0 10:26 ?        00:00:00 [vhost-12842]
> >> root     12897     2  0 10:26 ?        00:00:00 [vhost-12842]
> >> root     12898     2  0 10:26 ?        00:00:00 [vhost-12842]
> >> root     12899     2  0 10:26 ?        00:00:00 [vhost-12842]
> >> root     12900     2  0 10:26 ?        00:00:00 [vhost-12842]
> >>
> >> root     13022     2  0 10:26 ?        00:00:00 [vhost-12981]
> >> root     13023     2  0 10:26 ?        00:00:00 [vhost-12981]
> >> root     13024     2  0 10:26 ?        00:00:00 [vhost-12981]
> >> root     13025     2  0 10:26 ?        00:00:00 [vhost-12981]
> >> root     13026     2  0 10:26 ?        00:00:00 [vhost-12981]
> >>
> >> root     13146     2  0 10:26 ?        00:00:00 [vhost-13088]
> >> root     13147     2  0 10:26 ?        00:00:00 [vhost-13088]
> >> root     13148     2  0 10:26 ?        00:00:00 [vhost-13088]
> >> root     13149     2  0 10:26 ?        00:00:00 [vhost-13088]
> >> root     13150     2  0 10:26 ?        00:00:00 [vhost-13088]
> >> ...
> >>
> >> Code difference:
> >> In 2.6.32.60, vhost_init() creates one kthread for all of vhost:
> >> vhost_workqueue = create_singlethread_workqueue("vhost");
> >>
> >> In 2.6.32-71, vhost_dev_set_owner() creates a kthread for each
> >> NIC interface:
> >> dev->wq = create_singlethread_workqueue(vhost_name);
> >>
> >> Conclusion:
> >> With the per-vhost kthread enabled, the system can achieve more
> >> throughput, but handling the same traffic load wastes more CPU.
> >>
> >> In my application scenario, CPU is the scarcer resource, and one
> >> kthread is enough to handle the traffic load.
> >>
> >> So I think we should add a parameter to control this:
> >> for a CPU-bound system, it disables the per-vhost kthread;
> >> for an I/O-bound system, it enables the per-vhost kthread.
> >> The default value is enabled.
> >>
> >> If this makes sense, I will send a patch for it.
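If such a knob were added, it might look like the fragment below (a hypothetical sketch, not a patch; `per_device_thread` is a made-up parameter name, and the default matches the 2.6.32-71 behavior):

```c
/* Hypothetical module parameter sketch -- not a real vhost patch. */
static int per_device_thread = 1;
module_param(per_device_thread, int, 0444);
MODULE_PARM_DESC(per_device_thread,
	"Create a vhost workqueue per device (1, default) or share one (0)");

/* In vhost_dev_set_owner(), the choice would then be roughly:
 *   dev->wq = per_device_thread ?
 *             create_singlethread_workqueue(vhost_name) :
 *             vhost_workqueue;   (shared queue created in vhost_init())
 */
```

A user would then select the mode at load time, e.g. `modprobe vhost_net per_device_thread=0` (again, a hypothetical parameter name).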
> >
> > Let's try to figure out what the issue is, first.
> >
> > --
> > MST
> >