This bug was filed against a series that is no longer supported and so
is being marked as Won't Fix. If this issue still exists in a supported
series, please file a new bug.
This change has been made by an automated script, maintained by the
Ubuntu Kernel Team.
** Changed in: linux (Ubuntu)
This is a very serious problem. It's causing unpredictable lockups on
my server every 2-3 days, requiring a force-reboot. There are many
related reports from other users: e.g. #588046, #667656, #628530, and my
particular one, #684654.
This bug is not:
* Just on server kernels. My kernel is
Hi!
I vote for a much higher priority too! This is server-related, and that's
why it's really important.
This problem seems to be caused by high I/O load. Maybe it's related only to
filesystem I/O, maybe to IRQs/sec or pure SCSI I/O.
I'd be happy to test patches to debug and track this down.
Lars
My problem was a broken second disk in a raid 1 array. No problems
anymore!
--
Smbd,kjournald2 and rsync blocked for more than 120 seconds while using
ext4.
https://bugs.launchpad.net/bugs/494476
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
Bad news.
After the last update it happened again.
I filed a new bug report: bug 652812
Lars
If there are no server kernels to test, there should be enough
information for the developers to work towards resolving this issue. For
all reporters, it would be good to file individual bugs for each system
with issues. The bugs can be filed against linux as the package, using
ubuntu-bug linux.
Hi there!
News!
We have had no failures for 20 days!
This seems to be because we reduced the filesystem usage from 99% down to 70%.
This is an XFS filesystem:
From:
Filesystem  Size  Used Avail Use% Mounted on
/dev/md3    6.1T  6.0T   65G  99% /backup1
/dev/md2    6.1T
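Since the blocking here went away after dropping usage from 99% to 70%, a rough sketch of flagging near-full filesystems (the 90% threshold is an arbitrary example value, not from this thread):

```shell
# Sketch: print any mounted filesystem at or above 90% usage.
# df -P gives POSIX output: column 5 is Use%, column 6 the mount point.
df -P | awk 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 >= 90) print $6, "is", $5 "% full" }'
```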
Any news on this issue? I'm getting the same error messages when using
tar. Two hours before, I was on 8.04: no problems
Hi,
the kernel update from last week has made our troubles worse.
I have not managed to get one backup without the 120 s blocking since the update,
although I didn't change anything else. Still using noop as the I/O scheduler.
I'll open a new bug report when no one is responding here.
Sorry but this is
Hi!
The Medium importance might be correct for desktops, which are commonly
restarted once a day.
But for servers this is most important, because they don't work as expected.
Is there someone working on this?
Where is the focus for Ubuntu: servers or desktops?
I'd like to help. But someone has to
Hi,
bad news.
The backup stopped due to a timeout.
There was no blocking for 120 s, but there is still some blocking.
The probability of it happening seems reduced when using noop as the I/O scheduler.
Is there someone tracking this down? I'd like to help to get this
debugged and resolved.
Best regards
Lars
Happening to me too. Several times on the 11th and 12th. May be
triggered by load from outside. Looks like a SYN flood after an incident
yesterday. Added mod_evasive, blocked a few bad bots, and reduced MaxClients
to 50 (prefork). No events since last night.
https://www.bijk.com/p/2199b5ea shows
I suspect the probability of occurrence depends on the CPU load.
Here on my server there is (nearly) no network traffic except two mostly idle
ssh sessions.
Lars
Hi,
"noop" has worked for the last two days now. It seems stable.
Lars
Hi!
Last night the backup was interrupted again.
Our backup process timed out while waiting to be scheduled:
[71881.750043] INFO: task mbuffer:8637 blocked for more than 120 seconds.
[71881.750531] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[71881.751090] mbuffer1
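The message above comes from the kernel's hung-task watchdog; the 120-second threshold is the sysctl it names. A minimal sketch of inspecting (and, as root, adjusting) it, assuming a kernel built with the hung-task detector:

```shell
# Sketch: read the hung-task watchdog threshold named in the log message.
# Requires a kernel with the hung-task detector; changing it requires root.
f=/proc/sys/kernel/hung_task_timeout_secs
if [ -r "$f" ]; then
    cat "$f"                 # 120 by default on these kernels
    # echo 300 > "$f"        # as root: raise the reporting threshold
    # echo 0   > "$f"        # as root: disable the warning entirely
else
    echo "hung-task detector not available"
fi
```

Note that this only changes when the warning is printed; it does not fix the underlying blocking.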
Hi everybody.
The test with cfq was successful too.
In our setup and for our purpose, noop was a bit faster (8:19 hours
instead of 8:49 hours).
So with a server kernel, don't use deadline if you get blocked
processes. This seems to be the solution.
Good luck.
Lars
In my case removing all LVM snapshots prevented the errors (and the
enormous load). I left the scheduler at deadline.
Hi,
I had success setting the block I/O scheduler to something other than
"deadline" for each block device in our RAID arrays.
This is with kernel 2.6.32-24-server.
I tried "noop" and it worked fine for me. This isn't confirmed yet,
because each backup takes more than 12 hours.
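For anyone who wants to check or try the same scheduler switch, a rough sketch (the sysfs path is the standard one; switching requires root, and device coverage under /sys/block varies by system):

```shell
# Sketch: show the active I/O scheduler for each block device;
# the active scheduler is printed in [brackets] by the kernel.
for f in /sys/block/*/queue/scheduler; do
    [ -r "$f" ] || continue
    echo "$f: $(cat "$f")"
    # echo noop > "$f"     # as root: switch this device to noop
done
```

This change is not persistent across reboots; on these releases a kernel boot parameter (elevator=noop) was the usual way to make it stick.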
Lars,
by "have success setting" do you mean you avoid the freezes?
I'm also running 2.6.32-24-server (but with an Adaptec hardware RAID) and I
haven't had a "task blocked for more than 120 seconds" message in my kern.log
since Jul 8.
However, I think the freezes are still happening, just not for 120
Hi Tom,
yes, I haven't had these "freezes" with noop.
With our setup it was reproducible: after some (3-4) hours of heavy I/O
load from the HDDs, our process gets blocked for more than 120 s and then
stops itself because it can't write to the tape drive any more.
With noop the backup (actual
Hi,
seems I have the same problem, with an mbuffer process reading from a
soft RAID 6 of 24 SATA drives with XFS on it.
It's our backup system, which writes everything (5.9 TB) to tapes.
In the end mbuffer stops with this error:
mbuffer: warning: error during output to /dev/st0: Input/output
@Okkulter,
I understand that given your setup you may not be able to do this; however,
if it were possible for you to test with the latest development kernel, that
would be helpful.
ISO CD images are available from
http://cdimage.ubuntu.com/daily/current/ .
Also, if you could test the latest upstream
** Tags removed: kernel-candidate
** Tags added: kernel-candidate kernel-reviewed
As suggested by Brian Rogers in bug #276476 I'm posting an update here.
Hope to attract some attention!
I'm running a file server that shares home directories and other directories
(~4 TB, ext4) with 4 Linux clients via NFS and about 20 Windows and Mac clients
via Samba. Rsync runs several times
After some more investigation (due to a third crash, and my server's
unavailability) I found something new in the log:
at the same time as all the crashes, I see in my apache2 access.log a
lot of concurrent accesses from the same IP (I think from a badly configured
corporate proxy). This may be the
I have had (quite) the same issue twice in 2 days (on Karmic),
but I am not using ext4 (only ext3 and JFS),
and in the first crash I had, trackerd was the first blocked task:
(from kern.log)
Apr 19 11:28:43 localhost kernel: [156600.910099] INFO: task trackerd:3486
blocked for more than 120
I think it could be related to this ext4 bug on kernel.org. Hopefully an
ext4 dev can shed more light on the issue?
https://bugzilla.kernel.org/show_bug.cgi?id=14830
** Bug watch added: Linux Kernel Bug Tracker #14830
http://bugzilla.kernel.org/show_bug.cgi?id=14830
Hi,
Is this still alive? Should I post here instead:
https://bugs.launchpad.net/bugs/276476 ?
I've been getting the same errors (INFO: task xyz blocked for more than
120 seconds) since I upgraded to ext4 on my 4TB /home partition. It gets
a lot of I/O including rsync backups, smbd and nfs
hi there,
As a result of this error (behavior?) I switched back from ext4 to ext3 on my
14 TB soft-RAID volume, and there are no more blocking messages.
== error with ext4 - definitely
greetings to all who are waiting for a fix
I will still keep an eye on this
thx
Bug 276476 states that we should file separate bugs for the blocked
processes (if I interpret the comments correctly). Can we instead dump
everything here?
The easiest way to reproduce this is just running rsync or (s)cp with a
big file. I see mainly blocked kvm, pdflush and kjournald in kern.log
** Changed in: linux (Ubuntu)
Assignee: (unassigned) => Surbhi Palande (csurbhi)
@ Okkulter, can you run apport-collect to attach log files to this
bug? A complete output of dmesg when you hit the bug would be helpful.
Can you also try the latest kernel at
https://launchpad.net/~stefan-bader-canonical/+archive/karmic/+packages
to see if it helps?
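A rough sketch of pulling just the hung-task traces out of dmesg before attaching it here (the file name dmesg.txt is an arbitrary example; reading dmesg may require root depending on kernel settings):

```shell
# Sketch: save the kernel ring buffer and extract the hung-task
# reports with some surrounding context lines.
dmesg > dmesg.txt 2>/dev/null || true
grep -B 1 -A 10 "blocked for more than 120 seconds" dmesg.txt \
    || echo "no hung-task reports in this boot"
```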
** Changed in: linux (Ubuntu)
Importance: Undecided => Medium
** Changed in: linux (Ubuntu)
Status: New => Triaged
** Tags added: karmic