On Wed, Aug 17, 2016 at 12:05:22PM +0300, Jouni Malinen wrote:
> I had not realized this previously due to the test case passing, but the
> same retransmit SYN case was happening with older kernels, it just was
> done a tiny bit faster to escape that 1.0 second timeout limit.. That
> about 1.03
On Thu, Aug 11, 2016 at 06:21:26PM +0300, Jouni Malinen wrote:
> The test code looked like this in python:
>
> addr = (url.hostname, url.port)
> socks = {}
> for i in range(20):
>     socks[i] = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
>
Jon,
On Mon, Jul 25, 2016 at 03:56:48PM +0100, Jon Hunter wrote:
> > When tearing down, call timers_dead_cpu before notify_dead.
> > There is a hidden dependency between:
> >
> > - timers
> > - Block multiqueue
> > - rcutree
> >
> > If timers_dead_cpu() comes later than
Jon,
On Tue, Jul 26, 2016 at 10:20:58AM +0100, Jon Hunter wrote:
> Thanks. I have not tried another ARM based device, but I would be
> curious if another ARM device sees this or not.
I do see this stall on socfpga and on zynq, but in both cases the
suspend mechanism is flakey in other ways, too.
On Mon, Jul 25, 2016 at 05:35:43PM +0200, rcoch...@linutronix.de wrote:
> I see if I can find a tegra system to test with...
I tried the tip:smp/hotplug branch under kvm x86_64, and I didn't see
any problems with suspend or hibernate, even with CONFIG_PREEMPT_NONE.
I'll see if I can get my hands
On Mon, Jul 25, 2016 at 03:56:48PM +0100, Jon Hunter wrote:
> > There is a hidden dependency between:
> >
> > - timers
> > - Block multiqueue
> > - rcutree
> >
> > If timers_dead_cpu() comes later than blk_mq_queue_reinit_notify()
> > that latter function causes a RCU stall.
>
> After this change
On Tue, Apr 05, 2016 at 01:36:38PM +0200, Heiko Carstens wrote:
> On Tue, Apr 05, 2016 at 01:23:36PM +0200, Heiko Carstens wrote:
> > Subsequently, in this case, the setup_pmc_cpu() call will be executed on
> > the wrong cpu.
>
> .. or to illustrate this behaviour: the following patch (white
On Tue, Apr 05, 2016 at 05:53:47AM +0000, Brown, Len wrote:
> > On Tue, Apr 05, 2016 at 04:20:47AM +0000, Brown, Len wrote:
> > > No, I do not believe that cpuidle should bother
> > > supporting changing idle drivers at run-time.
> >
> > Huh? But you just said, "it would be good to be able to
On Tue, Apr 05, 2016 at 04:20:47AM +0000, Brown, Len wrote:
> The first idle driver to register with cpuidle wins.
>
> intel_idle should always get the opportunity
> to probe and register before acpi_idle (processor_idle.c)
>
> When intel_idle was allowed to be modular,
> some distros build
On Mon, Apr 04, 2016 at 03:55:35PM -0400, Paul Gortmaker wrote:
> > This was done in commit 6ce9cd8669fa1195fdc21643370e34523c7ac988
> > ("intel_idle: disable module support") since "...the module capability
> > is causing more trouble than it is worth."
The reason given in that commit was that