On 02/11/15 10:01, Richard Weinberger wrote:
> Am 02.11.2015 um 10:53 schrieb Anton Ivanov:
>> [snip]
>>
> I'm pretty sure that you don't see the issue as your Jessie userspace uses
> nanosleep periodically.
There are quite a few things running so this may indeed be the case.
I was testing under similar conditions (CPU pinning using taskset -c 0
on a multicore).
I have removed it and run some retests - I cannot reproduce the hang at
this point with my config.
I am going to run a defconfig and compare the results to see if this
will give me any insights into the root cause.
On 02/11/15 08:52, Richard Weinberger wrote:
> Am 02.11.2015 um 09:41 schrieb Anton Ivanov:
>> There are quite a few things running so this may indeed be the case.
>>
>> What do you use for userspace (so I can try to reproduce this and debug it)?
> Debian Squeeze amd64 with
Am 31.10.2015 um 17:22 schrieb Anton Ivanov:
> Richard, can you send me your config?
Sure. It is basically a defconfig.
> I have had it running for a couple of days before submission, both under load
> and idle, and it was doing OK.
Well, what userspace did you try?
I did some more tests, if
On 31/10/15 19:08, Richard Weinberger wrote:
> Am 31.10.2015 um 17:22 schrieb Anton Ivanov:
>> Richard, can you send me your config?
> Sure. It is basically a defconfig.
I am dragging an old config from some work I did a while back. I will
scrap it and do a defconfig build once I get back home (I
Am 31.10.2015 um 16:16 schrieb Thomas Meyer:
> mhh. strange. I didn't see this behaviour on my machine, but my machine
> is a rare single core system so, likely a race condition while relaying
> the timer interrupt to the userspace process.
Here I can trigger it by starting UML, logging in and
Richard, can you send me your config?
I have had it running for a couple of days before submission, both under
load and idle, and it was doing OK.
I got the first patchset to build, it works very well on a single core
host or with CPU pinning of the UML - the performance gain is > 25%.
However, I introduced a race somewhere along the way - it crashes UML
reliably if you do not pin CPUs.
I will debug it, fix it and submit. I am guessing a
Hi List, hi Richard,
I am going to sort out the UBD patchset next, as that has no dependencies
on the timer work and the other stuff which is waiting in the queue behind
the timers.
That should give UML a significant boost. It is nothing particularly
revolutionary (qemu has been using some of that for
Background: UML is using an obsolete itimer call for
all timers and "polls" for kernel-space timer firing
in its userspace portion, resulting in a long list
of bugs and incorrect behaviours. It also uses
ITIMER_VIRTUAL for its timer, which results in the
timer being dependent on it running and the