On 18/12/2017 08:51, Yang Zhong wrote:
> On Mon, Dec 18, 2017 at 03:17:33PM +0800, Shannon Zhao wrote:
>>
>>
>> On 2017/12/12 14:54, Yang Zhong wrote:
> 2) what effect it has on boot time in Shannon's case.
>>> Hello Shannon,
>>>
>>> It's hard for me to reproduce your commands in my x86
On 2017/12/12 14:54, Yang Zhong wrote:
>> > 2) what effect it has on boot time in Shannon's case.
> Hello Shannon,
>
> It's hard for me to reproduce your commands in my x86 environment; as a
> comparison test,
> would you please help me use the above two TEMP patches to verify VM
> bootup time
On 07/12/2017 16:06, Yang Zhong wrote:
> Which shows each trim costs less than 1 ms; call_rcu_thread() did 10
> batch frees, and the trim also ran 10 times.
>
> I also made the changes below:
> delta=1000, and
> next_trim_time = qemu_clock_get_ns(QEMU_CLOCK_HOST) + delta *
> last_trim_time
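The quoted back-off formula can be sketched as a small rate limiter; this is a hypothetical reconstruction, not QEMU code. `trim_allowed()`, `trim_delta`, and the caller-supplied clock value are assumed names; in QEMU the timestamp would come from `qemu_clock_get_ns(QEMU_CLOCK_HOST)` as in the quoted line.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the rate-limiting idea in the quoted snippet: after each
 * malloc_trim(), push the next allowed trim out by delta times the
 * time the last trim took, so costly trims run less often. */

static int64_t next_trim_time;          /* ns: earliest time of next trim */
static const int64_t trim_delta = 1000; /* multiplier from the quoted test */

/* now_ns: current host-clock timestamp in ns.
 * last_trim_ns: how long the previous malloc_trim() call took. */
static bool trim_allowed(int64_t now_ns, int64_t last_trim_ns)
{
    if (now_ns < next_trim_time) {
        return false;   /* still inside the back-off window */
    }
    next_trim_time = now_ns + trim_delta * last_trim_ns;
    return true;
}
```

With delta=1000 and a 1 us trim, trims are spaced at least 1 ms apart, matching the "less than 1 ms" cost observed above.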
On 06/12/2017 10:26, Yang Zhong wrote:
> Hello Paolo,
>
> The best option is to trim only once after guest kernel bootup or VM
> bootup; as for
> hotplug/unplug operations while the VM is running, the trim can still be
> done for each batch
> memory free because trim will not impact the VM
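The policy discussed above could be gated on a single flag; this is a minimal sketch with assumed names (`guest_booted`, `should_trim_after_batch`), not the code that was actually posted.

```c
#include <stdbool.h>

/* Sketch: suppress all trims while the guest is still booting, then
 * allow a trim after every batch free (e.g. during device
 * hotplug/unplug), since a trim then will not impact the running VM. */

static bool guest_booted;   /* set when guest kernel bootup completes */

/* Called after each batch of frees in the RCU thread. */
static bool should_trim_after_batch(void)
{
    return guest_booted;    /* no trims during bootup; per-batch after */
}

/* Called once when bootup completes; returns true to request the one
 * catch-up trim right after bootup. */
static bool boot_complete_hook(void)
{
    guest_booted = true;
    return true;
}
```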
Hi Yang,
On 2017/12/4 20:03, Yang Zhong wrote:
> On Fri, Dec 01, 2017 at 01:52:49PM +0100, Paolo Bonzini wrote:
>> > On 01/12/2017 11:56, Yang Zhong wrote:
>>> > > This issue should be caused by the many system calls made by
>>> > > malloc_trim(),
>>> > > Shannon's test script includes 60 scsi
On 04/12/2017 13:07, Daniel P. Berrange wrote:
> On Mon, Dec 04, 2017 at 08:03:22PM +0800, Yang Zhong wrote:
>> On Fri, Dec 01, 2017 at 01:52:49PM +0100, Paolo Bonzini wrote:
>>> On 01/12/2017 11:56, Yang Zhong wrote:
This issue should be caused by the many system calls made by
On Fri, Dec 01, 2017 at 01:52:49PM +0100, Paolo Bonzini wrote:
> On 01/12/2017 11:56, Yang Zhong wrote:
> > This issue should be caused by the many system calls made by malloc_trim();
> > Shannon's test script includes 60 scsi disks and 31 ioh3420 devices. We
> > need a
> > trade-off between
On 27/11/2017 04:06, Zhong Yang wrote:
> #test command
> ./qemu-system-x86_64 -enable-kvm -cpu host -m 2G -smp cpus=4,cores=4,\
> threads=1,sockets=1 -drive format=raw,\
> file=test.img,index=0,media=disk -nographic
>
> #without patch
>
Hi,
On 2017/11/24 14:30, Yang Zhong wrote:
> Since there are some issues in the memory alloc/free mechanism
> in glibc for little chunks: if QEMU frequently
> allocs/frees little chunks of memory, glibc doesn't allocate the
> little chunks from its free list and still
> allocates from the OS,
On Fri, Nov 24, 2017 at 02:30:30PM +0800, Yang Zhong wrote:
> diff --git a/configure b/configure
> index 0c6e757..6292ab0 100755
> --- a/configure
> +++ b/configure
> @@ -426,6 +426,7 @@ vxhs=""
> supported_cpu="no"
> supported_os="no"
> bogus_os="no"
> +malloc_trim="yes"
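The diff above only shows the new `malloc_trim` configure variable. Assuming it is exposed as the usual paired `--enable-malloc-trim` / `--disable-malloc-trim` switches (the flag names here are an assumption based on the variable name), a comparison build could look like:

```shell
# Hypothetical usage: build once without trimming to compare bootup time
# against the default (malloc_trim="yes") build.
./configure --target-list=x86_64-softmmu --disable-malloc-trim
make -j"$(nproc)"
```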
Looks pretty good,
Since there are some issues in the memory alloc/free mechanism
in glibc for little chunks: if QEMU frequently
allocs/frees little chunks of memory, glibc doesn't allocate the
little chunks from its free list and still
allocates from the OS, which makes the heap size bigger and bigger.
This patch
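The glibc behaviour described in the cover letter can be illustrated with a standalone program (glibc-only, not QEMU code): freed little chunks stay on glibc's free lists, so the heap does not shrink by itself, and `malloc_trim(0)` asks glibc to return releasable pages to the OS.

```c
#include <malloc.h>
#include <stdlib.h>

/* Allocate and free many little chunks, then measure how much heap
 * malloc_trim(0) releases.  mallinfo() is glibc-specific and its int
 * fields can overflow on huge heaps; that is fine for this small demo.
 * Returns arena size before the trim minus arena size after it. */
static long trimmed_bytes(void)
{
    enum { N = 100000 };
    static void *p[N];

    for (int i = 0; i < N; i++) {
        p[i] = malloc(64);          /* lots of little chunks */
    }
    for (int i = 0; i < N; i++) {
        free(p[i]);                 /* back to free lists; heap stays big */
    }

    long before = mallinfo().arena; /* heap size before trimming */
    malloc_trim(0);                 /* give free pages back to the OS */
    long after = mallinfo().arena;

    return before - after;          /* trim never grows the arena */
}
```

This mirrors the problem the patch targets: without the explicit trim, the arena stays at its high-water mark even though every chunk has been freed.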