[Kernel-packages] [Bug 1064521] Re: Kernel I/O scheduling writes starving reads, local DoS

2021-10-13 Thread Steve Langasek
The Precise Pangolin has reached end of life, so this bug will not be
fixed for that release.

** Changed in: linux (Ubuntu Precise)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1064521

Title:
  Kernel I/O scheduling writes starving reads, local DoS

Status in linux package in Ubuntu:
  Triaged
Status in linux source package in Precise:
  Won't Fix
Status in linux source package in Quantal:
  Won't Fix

Bug description:
  On the Precise default kernel, an unprivileged user can, simply by
  running zcat on a large file, degrade I/O badly enough to cause
  serious disruption.

  Serious disruption means (e.g.) a single MySQL update hangs for over
  120 seconds on the default scheduler (cfq), and between 1 and 11
  seconds on the deadline scheduler.
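
  For anyone triaging this, the active scheduler can be inspected and
  switched per block device at runtime. A minimal sketch, assuming the
  affected disk is sda (adjust the device name for your system):

```shell
#!/bin/sh
# Show the active I/O scheduler for sda; the name in brackets is the one
# in use, e.g. "noop deadline [cfq]".  Guarded so this is a no-op on
# systems where the sysfs file is absent.
if [ -r /sys/block/sda/queue/scheduler ]; then
    cat /sys/block/sda/queue/scheduler
fi

# Extract the bracketed (active) scheduler name from such a line:
active_scheduler() {
    printf '%s\n' "$1" | sed -n 's/.*\[\([^]]*\)\].*/\1/p'
}
active_scheduler "noop deadline [cfq]"    # prints: cfq

# Switching at runtime needs root and does not survive a reboot:
#   echo deadline > /sys/block/sda/queue/scheduler
# To persist, boot with elevator=deadline on the kernel command line.
```

  (The 1-11s figure above for deadline was measured with exactly this
  kind of runtime switch.)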

  This is reproducible on 2 sets of hardware using:

  root@extility-qa-test:~# uname -a
  Linux extility-qa-test 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
  linux-image-3.2.0-29-generic 3.2.0-29.46
  root@extility-qa-test:~# cat /proc/sys/vm/dirty_ratio
  20
  root@extility-qa-test:~# cat /proc/sys/vm/dirty_background_ratio
  10
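
  These two thresholds are percentages of system memory that may sit
  dirty in the page cache before background writeback starts (10%) and
  before writers are throttled (20%). One common mitigation, not a fix
  for the scheduler behaviour itself, and with illustrative values that
  have not been tested against this bug, is to lower them so flushing
  starts earlier and in smaller bursts:

```shell
#!/bin/sh
# Read the current thresholds (works unprivileged; ignore errors off-Linux).
sysctl vm.dirty_ratio vm.dirty_background_ratio 2>/dev/null || true

# Lowering them needs root, e.g.:
#   sysctl -w vm.dirty_ratio=10
#   sysctl -w vm.dirty_background_ratio=5
# Add the same keys to /etc/sysctl.conf to persist across reboots.

# For intuition: with 8 GiB of RAM, dirty_ratio=20 lets roughly this many
# bytes of dirty data accumulate before writers are throttled:
dirty_limit_bytes=$(( 8 * 1024 * 1024 * 1024 * 20 / 100 ))
echo "$dirty_limit_bytes"    # 1717986918
```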

  No such problems occur on a Lucid OS running the Oneiric Backports
  kernel.

  root@management-dev2:~# uname -a
  Linux management-dev2 3.0.0-15-server #26~lucid1-Ubuntu SMP Wed Jan 25 15:55:45 UTC 2012 x86_64 GNU/Linux
  linux-image-3.0.0-15-server   3.0.0-15.26~lucid1


  
  In order to replicate, download (e.g.) this gzipped Lucid image (note this
  is not the OS we are running; it is just an example of a file that causes
  the problem):
   http://repo.flexiant.com/images/public/kvm/ubuntu10.04.img.gz
  and, as an unprivileged user on a default, untuned Precise install, run:

  zcat ubuntu10.04.img.gz > test

  
  Now, in another window, execute any trivial MySQL update on any table.
  Note that this can take an extremely long time.

  "show full processlist" in mysql console will show the time taken
  executing the command.
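
  The same starvation can be provoked without the image download: any
  large sequential buffered write will do. A rough, scaled-down sketch
  (file names are arbitrary; sizes are shrunk so it is safe to run
  anywhere — scale the flood count up to GiB sizes to reproduce the
  actual stall):

```shell
#!/bin/sh
# Background write flood standing in for the zcat (64 MiB here; the
# report used a multi-GiB image).
dd if=/dev/zero of=flood.img bs=1M count=64 2>/dev/null &
flood_pid=$!

# Time a tiny synchronous write -- the analogue of the MySQL update,
# which the stack traces show blocking in fsync()/jbd2_log_wait_commit().
start=$(date +%s)
dd if=/dev/zero of=probe.dat bs=4k count=1 conv=fsync 2>/dev/null
elapsed=$(( $(date +%s) - start ))
echo "small fsync'd write took ${elapsed}s"

wait "$flood_pid"
rm -f flood.img probe.dat
```

  On an affected kernel with cfq, the probe's fsync latency balloons
  while the flood is being written back.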

  
  In kernel logs (with cfq) we see e.g.:

  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268048] INFO: task mysqld:1358 blocked for more than 120 seconds.
  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268144] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268267] mysqld  D 81806200 0  1358  1 0x
  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268272]  8801921fde48 0082 8801921fde00 00030001
  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268278]  8801921fdfd8 8801921fdfd8 8801921fdfd8 00013780
  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268283]  880195169700 880191f79700 8801921fde58 8801912b2800
  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268288] Call Trace:
  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268298]  [816579cf] schedule+0x3f/0x60
  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268303]  [812650d5] jbd2_log_wait_commit+0xb5/0x130
  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268308]  [8108aa50] ? add_wait_queue+0x60/0x60
  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268313]  [81211248] ext4_sync_file+0x208/0x2d0
  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268317]  [81177ba0] ? vfs_write+0x110/0x180
  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268321]  [811a63a6] do_fsync+0x56/0x80
  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268325]  [811a66d0] sys_fsync+0x10/0x20
  Oct  8 14:57:02 extility-qa-test kernel: [ 3840.268329]  [81661ec2] system_call_fastpath+0x16/0x1b
  Oct  8 14:59:02 extility-qa-test kernel: [ 3960.268176] INFO: task mysqld:1358 blocked for more than 120 seconds.
  Oct  8 14:59:02 extility-qa-test kernel: [ 3960.268282] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  Oct  8 14:59:02 extility-qa-test kernel: [ 3960.268393] mysqld  D 81806200 0  1358  1 0x
  Oct  8 14:59:02 extility-qa-test kernel: [ 3960.268399]  8801921fde48 0082 8801921fde00 00030001
  Oct  8 14:59:02 extility-qa-test kernel: [ 3960.268405]  8801921fdfd8 8801921fdfd8 8801921fdfd8 00013780
  Oct  8 14:59:02 extility-qa-test kernel: [ 3960.268410]  880195169700 880191f79700 8801921fde58 8801912b2800
  Oct  8 14:59:02 extility-qa-test kernel: [ 3960.268415] Call Trace:
  Oct  8 14:59:02 extility-qa-test kernel: [ 3960.268426]  [816579cf] schedule+0x3f/0x60
  Oct  8 14:59:02 extility-qa-test kernel: [ 3960.268431]  [812650d5] jbd2_log_wait_commit+0xb5/0x130
  Oct  8 14:59:02 extility-qa-test kernel: [ 3960.268436]  [8108aa50] ? add_wait_queue+0x60/0x60
  Oct  8 14:59:02 extility-qa-test kernel: [ 3960.268441]  [81211248] ext4_sync_file+0x208/0x2d0

[Kernel-packages] [Bug 1064521] Re: Kernel I/O scheduling writes starving reads, local DoS

2014-06-26 Thread Jamie Strandboge
** Changed in: linux (Ubuntu Quantal)
   Status: Triaged => Won't Fix

[Kernel-packages] [Bug 1064521] Re: Kernel I/O scheduling writes starving reads, local DoS

2014-05-19 Thread Ivan Baldo
And now what happens with Ubuntu 14.04?
(yeah, too lazy myself to test, sorry)

[Kernel-packages] [Bug 1064521] Re: Kernel I/O scheduling writes starving reads, local DoS

2013-11-28 Thread Christopher M. Penalver
** Tags added: bios-outdated-2.6.1 needs-upstream-testing regression-release
