Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
Hello again! I know it's been ages, but I finally got some time to get that patch tested out and try additional debugging.

On Sep 01, 2011, at 11:17, Jan Kara wrote:

On Tue 30-08-11 19:26:22, Moffett, Kyle D wrote:

On Aug 30, 2011, at 18:12, Jan Kara wrote:

I can still trigger it on my VM snapshot very easily, so if you have anything you think I should test I would be very happy to give it a shot.

OK, so in the meantime I found a bug in data=journal code which could be related to your problem. It is fixed by commit 2d859db3e4a82a365572592d57624a5f996ed0ec which is in 3.1-rc1. Have you tried that or a newer kernel as well? If the problem still is not fixed, I can provide some debugging patch to you. We spoke with Josef Bacik about how errors like yours could happen, so I have some places to watch...

I have not tried anything more recent; I'm actually a bit reluctant to move away from the Debian squeeze official kernels since I do need the security updates. I took a quick look and I can't find that function in 2.6.32, so I assume it would be a rather nontrivial back-port. It looks like the relevant code used to be in ext4_clear_inode somewhere?

It's not that hard - untested patch attached.

So this applied mostly cleanly (with one minor context-only conflict in the 2.6.32.17 patch); unfortunately it didn't resolve the problem. Just as a sanity check, I upgraded to the Debian 3.1.0-1-amd64 kernel, based on kernel version 3.1.1, and the problem still occurs there too (additional info at the end of the email).

Looking at the issue again, I don't think it has anything to do with file deletion at all. Specifically, there are a grand total of 4 files in that filesystem (alongside an empty lost+found directory):

    master.lock
    prng_exch
    smtpd_scache.db
    smtp_scache.db

As far as I can tell, none of those is ever deleted during normal operation.

The crash occurs very quickly after starting postfix. It connects to the external email server (using TLS) and begins to flush queued mail. At that point, the tlsmgr daemon tries to update the smtp_scache.db file, which is a Berkeley DB about 40k in size. Somewhere in there, the Berkeley DB does an fdatasync(). The fdatasync() apparently triggers the bad behavior from the jbd2 thread, which then oopses in fs/jbd2/commit.c:485 (which appears to be the same BUG_ON() as before). The stack looks something like this:

    jbd2_journal_commit_transaction+0x4ea/0x1053 [jbd2]
    kjournald2+0xc0/0x20a [jbd2]
    add_wait_queue+0x3c/0x3c
    commit_timeout+0x5/0x5 [jbd2]
    kthread+0x76/0x7e

Cheers, Kyle Moffett

--
Curious about my work on the Debian powerpcspe port? I'm keeping a blog here: http://pureperl.blogspot.com/
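[Editor's note: the overwrite-plus-fdatasync() cycle described above suggests a minimal userspace reproducer. The following is a sketch, not something from the original thread — the file name, page offsets, and iteration count are assumptions meant to mimic tlsmgr rewriting the ~40k smtp_scache.db inside the data=journal mount.]

    /* Reproducer sketch: repeated page overwrites + fdatasync() on a
     * data=journal ext4 filesystem. All specifics here are assumed,
     * not taken from the Berkeley DB internals. Run from inside the
     * affected mount point. */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        int fd = open("scache-test.db", O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return 1;
        memset(buf, 'x', sizeof(buf));
        for (int i = 0; i < 1000; i++) {
            /* Rewrite one page somewhere in the first ~40k, like a
             * Berkeley DB page update, then force it to disk. */
            off_t off = (off_t)(rand() % 10) * 4096;
            pwrite(fd, buf, sizeof(buf), off);
            fdatasync(fd);
        }
        close(fd);
        return 0;
    }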
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
On Aug 30, 2011, at 18:12, Jan Kara wrote:

On Fri 26-08-11 16:03:32, Moffett, Kyle D wrote:

Ping? Any more ideas for debugging this issue?

Sorry for not getting to you earlier.

That's ok, I have a workaround so it's been on my back burner for a while.

I can still trigger it on my VM snapshot very easily, so if you have anything you think I should test I would be very happy to give it a shot.

OK, so in the meantime I found a bug in data=journal code which could be related to your problem. It is fixed by commit 2d859db3e4a82a365572592d57624a5f996ed0ec which is in 3.1-rc1. Have you tried that or a newer kernel as well? If the problem still is not fixed, I can provide some debugging patch to you. We spoke with Josef Bacik about how errors like yours could happen, so I have some places to watch...

I have not tried anything more recent; I'm actually a bit reluctant to move away from the Debian squeeze official kernels since I do need the security updates. I took a quick look and I can't find that function in 2.6.32, so I assume it would be a rather nontrivial back-port. It looks like the relevant code used to be in ext4_clear_inode somewhere?

Out of curiosity, what would happen in data=journal mode if you unlinked a file which still had buffers pending? That case does not seem to be handled by that commit you mentioned; was it already handled elsewhere?

Thanks again!

Cheers, Kyle Moffett
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
Ping? Any more ideas for debugging this issue? I can still trigger it on my VM snapshot very easily, so if you have anything you think I should test I would be very happy to give it a shot.

On Jun 24, 2011, at 16:51, Kyle Moffett wrote:

On Jun 24, 2011, at 16:02, Jan Kara wrote:

On Fri 24-06-11 11:03:52, Moffett, Kyle D wrote:

On Jun 24, 2011, at 09:46, Jan Kara wrote:

On Thu 23-06-11 16:19:08, Moffett, Kyle D wrote:

Besides which, line 534 in the Debian 2.6.32 kernel I am using is this one:

    J_ASSERT(commit_transaction->t_nr_buffers <= commit_transaction->t_outstanding_credits);

The trouble is that the problem is likely in some journal list shuffling code, because if just some operation wrongly estimated the number of needed buffers, we'd fail the assertion in jbd2_journal_dirty_metadata():

    J_ASSERT_JH(jh, handle->h_buffer_credits > 0);

Hmm, ok... I'm also going to turn that failing J_ASSERT() into a WARN_ON() just to see how much further it gets. I have an easy script to recreate this data volume even if it gets totally hosed anyways, so...

OK, we'll see what happens.

Ok, status update here: I applied a modified version of your patch that prints out the values of both t_outstanding_credits and t_nr_buffers when the assertion triggers. I replaced the J_ASSERT() that was failing with the exact same WARN_ON() trigger too. The end result is that postfix successfully finished delivering all the emails. Afterwards I unmounted both filesystems and ran fsck -fy on them; it reported no errors at all.

Looking through the log, the filesystem with the issues is the 32MB one mounted on /var/lib/postfix:

    total 61
    drwxr-x---  3 postfix postfix  1024 Jun 16 21:02 .
    drwxr-xr-x 46 root    root     4096 Jun 20 17:19 ..
    drwx------  2 root    root    12288 Jun 16 18:35 lost+found
    -rw-------  1 postfix postfix    33 Jun 24 16:34 master.lock
    -rw-------  1 postfix postfix  1024 Jun 24 16:44 prng_exch
    -rw-------  1 postfix postfix  2048 Jun 24 16:34 smtpd_scache.db
    -rw-------  1 postfix postfix 41984 Jun 24 16:36 smtp_scache.db

In particular, it's the tlsmgr program accessing the smtp_scache file when it dies. Full log below.
Cheers, Kyle Moffett

Jun 24 16:36:05 i-38020f57 kernel: [5369326.385234] transaction->t_outstanding_credits = 8
Jun 24 16:36:05 i-38020f57 kernel: [5369326.385247] transaction->t_nr_buffers = 9
Jun 24 16:36:05 i-38020f57 kernel: [5369326.385251] ------------[ cut here ]------------
Jun 24 16:36:05 i-38020f57 kernel: [5369326.385278] WARNING: at /tmp/kdm-deb-kernel/linux-2.6-2.6.32/debian/build/source_amd64_xen/fs/jbd2/transaction.c:1329 jbd2_journal_stop+0x189/0x25d [jbd2]()
Jun 24 16:36:05 i-38020f57 kernel: [5369326.385287] Modules linked in: ip6table_filter ip6_tables act_police cls_flow cls_fw cls_u32 sch_htb sch_hfsc sch_ingress sch_sfq xt_time xt_connlimit xt_realm iptable_raw xt_comment xt_recent xt_policy ipt_ULOG ipt_REJECT ipt_REDIRECT ipt_NETMAP ipt_MASQUERADE ipt_ECN ipt_ecn ipt_CLUSTERIP ipt_ah ipt_addrtype nf_nat_tftp nf_nat_snmp_basic nf_nat_sip nf_nat_pptp nf_nat_proto_gre nf_nat_irc nf_nat_h323 nf_nat_ftp nf_nat_amanda ts_kmp nf_conntrack_amanda nf_conntrack_sane nf_conntrack_tftp nf_conntrack_sip nf_conntrack_proto_sctp nf_conntrack_pptp nf_conntrack_proto_gre nf_conntrack_netlink nf_conntrack_netbios_ns nf_conntrack_irc nf_conntrack_h323 nf_conntrack_ftp xt_TPROXY nf_tproxy_core xt_tcpmss xt_pkttype xt_physdev xt_owner xt_NFQUEUE xt_NFLOG nfnetlink_log xt_multiport xt_MARK xt_mark xt_mac xt_limit xt_length xt_iprange xt_helper xt_hashlimit xt_DSCP xt_dscp xt_dccp xt_conntrack xt_CONNMARK xt_connmark xt_CLASSIFY ipt_LOG xt_tcpudp xt_state iptable_nat nf_nat nf_conntrac
Jun 24 16:36:05 i-38020f57 kernel: k_ipv4 nf_defrag_ipv4 nf_conntrack iptable_mangle nfnetlink iptable_filter ip_tables x_tables ext3 jbd loop snd_pcm snd_timer snd soundcore snd_page_alloc pcspkr evdev ext4 mbcache jbd2 crc16 dm_mod xen_netfront xen_blkfront
Jun 24 16:36:05 i-38020f57 kernel: [5369326.385440] Pid: 3817, comm: tlsmgr Not tainted 2.6.32-5-xen-amd64 #1
Jun 24 16:36:05 i-38020f57 kernel: [5369326.385445] Call Trace:
Jun 24 16:36:05 i-38020f57 kernel: [5369326.385458] [<ffffffffa0032c81>] ? jbd2_journal_stop+0x189/0x25d [jbd2]
Jun 24 16:36:05 i-38020f57 kernel: [5369326.385467] [<ffffffffa0032c81>] ? jbd2_journal_stop+0x189/0x25d [jbd2]
Jun 24 16:36:05 i-38020f57 kernel: [5369326.385477] [<ffffffff8104ef00>] ? warn_slowpath_common+0x77/0xa3
Jun 24 16:36:05 i-38020f57 kernel: [5369326.385486] [<ffffffffa0032c81>] ? jbd2_journal_stop+0x189/0x25d [jbd2]
Jun 24 16:36:05 i-38020f57 kernel: [5369326.385505] [<ffffffffa0074c8e>] ? __ext4_journal_stop+0x63/0x69 [ext4]
Jun 24 16:36:05 i-38020f57 kernel: [5369326.385517] [<ffffffffa0060949>] ? ext4_journalled_write_end+0x160/0x19a [ext4]
Jun 24 16:36:05 i-38020f57 kernel: [5369326.385633] [<ffffffffa00857c6>] ? ext4_xattr_get+0x1fa
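[Editor's note: for reference, here is a sketch of the kind of debugging change described above — the two counters printed and the J_ASSERT() downgraded to a WARN_ON(). This is reconstructed from the description in the thread, not Jan Kara's actual patch, and the surrounding context in fs/jbd2/commit.c differs between kernel versions.]

    /* Sketch of the debugging hunk described above: print both
     * counters, then warn instead of BUG so the commit can proceed.
     * Reconstructed from the discussion; not the applied patch. */
    if (commit_transaction->t_nr_buffers >
        commit_transaction->t_outstanding_credits) {
            printk(KERN_WARNING "transaction->t_outstanding_credits = %d\n",
                   commit_transaction->t_outstanding_credits);
            printk(KERN_WARNING "transaction->t_nr_buffers = %d\n",
                   commit_transaction->t_nr_buffers);
            WARN_ON(1);
    }
    /* was: J_ASSERT(commit_transaction->t_nr_buffers <=
     *               commit_transaction->t_outstanding_credits); */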
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
On Jun 28, 2011, at 10:16, Ted Ts'o wrote:

My basic impression is that the use of data=journalled can help reduce the risk (slightly) of serious corruption to some kinds of databases when the application does not provide appropriate syncs or journalling on its own (IE: such as text-based Wiki database files).

Yes, although if the application has index files that have to be updated at the same time, there is no guarantee that the changes that survive after a system failure (either a crash or a power fail) will be consistent, unless the application is doing proper application-level journalling or some other structured update scheme.

Manually rebuilding application indexes and clearing out caches is fine; with a badly written application I'd have to do that anyways. I just want to reduce the risk that I actually corrupt data, and it sounds like that's what data-journalling will help with.

To sum up, the only additional guarantee data=journal offers over data=ordered is a total ordering of all IO operations. That is, if you do a sequence of data and metadata operations, then you are guaranteed that after a crash you will see the filesystem in a state corresponding exactly to your sequence terminated at some (arbitrary) point. Data writes are disassembled into a page-sized, page-aligned sequence of writes for the purposes of this model...

data=journal can also make the fsync() operation faster, since it will involve fewer seeks (although it will require a greater write bandwidth). Depending on the write bandwidth, you really need to benchmark things to be sure, though.

Hm, so this would actually be very beneficial for a mail spool directory then, because mail servers are supposed to fsync each email received in order to guarantee that it will not be lost before the server acknowledges receipt to the SMTP client.

Thanks again!

Cheers, Kyle Moffett
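[Editor's note: the fsync-per-message requirement mentioned above is easy to illustrate. A sketch of the write-then-fsync pattern an MTA uses before acknowledging receipt — the path, helper name, and error handling are illustrative, not Postfix's actual code.]

    /* Sketch of an MTA-style durable spool write: the message must
     * reach stable storage before the SMTP "250 OK" is sent back.
     * All names here are hypothetical. */
    #include <fcntl.h>
    #include <unistd.h>

    int spool_message(const char *path, const char *msg, size_t len)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
        if (fd < 0)
            return -1;
        if (write(fd, msg, len) != (ssize_t)len || fsync(fd) != 0) {
            close(fd);
            unlink(path);      /* never acknowledge a lost message */
            return -1;
        }
        return close(fd);      /* only now reply 250 to the client */
    }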
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
This is really helpful to me, but it's deviated a bit from solving the original bug. Based on the last log that I generated showing that the error occurs in journal_stop(), what else should I be testing? Further discussion of the exact behavior of data-journalling below:

On Jun 28, 2011, at 05:36, Jan Kara wrote:

On Mon 27-06-11 23:21:17, Moffett, Kyle D wrote:

On Jun 27, 2011, at 12:01, Ted Ts'o wrote:

That being said, it is true that data=journalled isn't necessarily faster. For heavy disk-bound workloads, it can be slower. So I can imagine adding some documentation that warns people not to use data=journal unless they really know what they are doing, but at least personally, I'm a bit reluctant to dispense with a bug report like this by saying, oh, that feature should be deprecated.

I suppose I should chime in here, since I'm the one who (potentially incorrectly) thinks I should be using data=journalled mode. Please correct me if this is horribly horribly wrong: [...]

no journal: Nothing is journalled
+ Very fast.
+ Works well for filesystems that are mkfsed on every boot
- Have to fsck after every reboot

Fsck is needed only after a crash / hard powerdown. Otherwise completely correct. Plus you always have a possibility of exposing uninitialized (potentially sensitive) data after a fsck.

Yes, sorry, I dropped the word "hard" from "hard reboot" while editing... oops.

Actually, a normal desktop might be quite happy with a non-journaled filesystem when fsck is fast enough.

No, because fsck can occasionally fail on a non-journalled filesystem, and then Joe User is sitting there staring at an unhappy console prompt with a lot of cryptic error messages. It's also very bad for any kind of embedded or server environment that might need to come back up headless.

data=ordered: Data appended to a file will be written before the metadata extending the length of the file is written, and in certain cases the data will be written before file renames (partial ordering), but the data itself is unjournalled, and may be only partially complete for updates.
+ Does not write data to the media twice
+ A crash or power failure will not leave old uninitialized data in files.
- Data writes to files may only partially complete in the event of a crash. No problems for logfiles, or self-journalled application databases, but others may experience partial writes in the event of a crash and need recovery.

Correct; one should also note that no one guarantees the order in which data hits the disk - i.e. when you do write(f,a); write(f,b); and these are overwrites, it may happen that b is written while a is not.

Yes, right, I should have mentioned that too. If a program wants data-level ordering then it must issue an fsync() or fdatasync(). Just to confirm, a file write in data=ordered mode can be only partially written during a hard shutdown:

    char a[512] = "aaa...";
    char b[512] = "bbb...";
    write(fd, a, 512);
    fsync(fd);
    write(fd, b, 512);
    /* <== Hard poweroff here */
    fsync(fd);

The data on disk could contain any mix of "b"s and "a"s, and possibly even garbage data depending on the operation of the disk firmware, correct?

data=journalled: Data and metadata are both journalled, meaning that a given data write will either complete or it will never occur, although the precise ordering is not guaranteed. This also implies all of the metadata guarantees of data=ordered.
+ Direct IO data writes are effectively atomic, resulting in less likelihood of data loss for application databases which do not do their own journalling.
  This means that a power failure or system crash will not result in a partially-complete write.

Well, direct IO is atomic in data=journal the same way as in data=ordered. It can happen that only half of a direct IO write is done when you hit the power button at the right moment - note this holds for overwrites. Extending writes or writes to holes are all-or-nothing for ext4 (again, both in data=journal and data=ordered mode).

My impression of journalled data was that a single-sector write would be written checksummed into the journal and then later into the actual filesystem, so it would either complete (IE: journal entry checksum is OK and it gets replayed after a crash) or it would not (IE: journal entry does not checksum and therefore the later write never happened and the entry is not replayed). Where is my mental model wrong?

- Cached writes are not atomic
+ For small cached file writes (of only a few filesystem pages) there is a good chance that kernel writeback will queue the entire write as a single I/O and it will be protected as a result. This helps reduce the chance of serious damage to some text-based database files (such as those for some Wikis), but is obviously not a guarantee.

Page sized and page aligned writes are atomic (in both data=journal and data=ordered modes). When a write spans multiple pages, there are chances the writes will be merged in a single transaction, but no guarantees, as you properly write.
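[Editor's note: to make the page-granularity point concrete, here is a sketch of the kind of write that is atomic under the semantics Jan describes — one filesystem page, page-aligned, flushed with fdatasync(). The 4096-byte page size is an assumption, not queried from the system.]

    /* Sketch: a single page-sized, page-aligned overwrite, which per
     * the discussion above is atomic in both data=journal and
     * data=ordered modes. 4096-byte pages are assumed. */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int atomic_page_update(int fd, off_t page_index, const char *data)
    {
        char *buf;
        if (posix_memalign((void **)&buf, 4096, 4096) != 0)
            return -1;
        memcpy(buf, data, 4096);
        /* One aligned page: either the old or the new contents
         * survive a crash -- never a mix. */
        ssize_t n = pwrite(fd, buf, 4096, page_index * 4096);
        free(buf);
        if (n != 4096)
            return -1;
        return fdatasync(fd);
    }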
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
On Jun 28, 2011, at 18:57, Jan Kara wrote:

On Tue 28-06-11 14:30:55, Moffett, Kyle D wrote:

On Jun 28, 2011, at 05:36, Jan Kara wrote:

Well, direct IO is atomic in data=journal the same way as in data=ordered. It can happen that only half of a direct IO write is done when you hit the power button at the right moment - note this holds for overwrites. Extending writes or writes to holes are all-or-nothing for ext4 (again, both in data=journal and data=ordered mode).

My impression of journalled data was that a single-sector write would be written checksummed into the journal and then later into the actual filesystem, so it would either complete (IE: journal entry checksum is OK and it gets replayed after a crash) or it would not (IE: journal entry does not checksum and therefore the later write never happened and the entry is not replayed).

Umm, right. This is true. That's another guarantee of data=journal mode I didn't think of.

Ok, that's what I had hoped was the case. That doesn't help much for overwrites of variable-length data (EG: text files), but it does help protect stuff like MySQL MyISAM (which does not do journalling). It's probably unnecessary for MySQL InnoDB, which *does* have its own journal.

Page sized and page aligned writes are atomic (in both data=journal and data=ordered modes). When a write spans multiple pages, there are chances the writes will be merged in a single transaction, but no guarantees, as you properly write.

I don't know that our definitions of "atomic write" are quite the same... I'm assuming that a filesystem "atomic write" means that even if the disk itself does not guarantee that a single write will either complete or be discarded, then the filesystem will provide that guarantee.

OK. There are different levels of "disk does not guarantee atomic writes", though. E.g. flash disks don't guarantee atomic writes, and even worse, they can actually corrupt unrelated blocks on power failure, so any filesystem is actually screwed on power failure. For standard rotating drives I'd rely on the drive being able to write a full fs block (4k), although I agree no one really guarantees this.

Well, I've seen a study somewhere that some spinning media actually *can* tend to corrupt a nearby sector or two during a power failure, depending on exactly what the input voltage does. The better ones certainly have a voltage monitor that automatically cuts power to the heads when it goes below a critical level.

And the better Flash-based media actually *do* provide atomic write guarantees due to the wear-levelling and flash-remapping engine. In order to protect their mapping table metadata and avoid very large write amplification, they will use a system similar to a log-structured filesystem to accumulate a bunch of small random writes into one larger write. Since they're always writing into empty space and then doing an atomic metadata update, their writes are always effectively atomic, even for data. My informal testing of the Intel X18-M drives seems to indicate that they work that way.

Cheers, Kyle Moffett
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
On Jun 27, 2011, at 12:01, Ted Ts'o wrote:

On Mon, Jun 27, 2011 at 05:30:11PM +0200, Lukas Czerner wrote:

I've found some. So although data=journal users are a minority, there are some. That being said, I agree with you we should do something about it - either state that we want to fully support data=journal, and then we should really do better with testing it, or deprecate it and remove it (which would save us some complications in the code). I would be slightly in favor of removing it (code simplicity, fewer options to configure for the admin, fewer options to test for us; some users I've come across actually were not quite sure why they were using it - they just thought it looked safer).

Hmm... FYI, I hope to be able to bring on line automated testing for ext4 later this summer (there's a testing person at Google who has signed up to work on setting this up as his 20% project). The test matrix that I gave him includes data=journal, so we will be getting better testing in the near future.

At least historically, data=journalling was the *simpler* case, and was the first thing supported by ext4. (data=ordered required revoke handling, which didn't land for six months or so.) So I'm not really that convinced that removing it really buys us that much code simplification.

That being said, it is true that data=journalled isn't necessarily faster. For heavy disk-bound workloads, it can be slower. So I can imagine adding some documentation that warns people not to use data=journal unless they really know what they are doing, but at least personally, I'm a bit reluctant to dispense with a bug report like this by saying, oh, that feature should be deprecated.

I suppose I should chime in here, since I'm the one who (potentially incorrectly) thinks I should be using data=journalled mode.

My basic impression is that the use of data=journalled can help reduce the risk (slightly) of serious corruption to some kinds of databases when the application does not provide appropriate syncs or journalling on its own (IE: such as text-based Wiki database files).

Please correct me if this is horribly horribly wrong:

no journal: Nothing is journalled
+ Very fast.
+ Works well for filesystems that are mkfsed on every boot
- Have to fsck after every reboot

data=writeback: Metadata is journalled; data (to allocated extents) may be written before or after the metadata is updated with a new file size.
+ Fast (not as fast as unjournalled)
+ No need to fsck after a hard power-down
- A crash or power failure in the middle of a write could leave old data on disk at the end of a file. If security labeling such as SELinux is enabled, this could contaminate a file with data from a deleted file that was at a higher sensitivity. Log files (including binary database replication logs) may be effectively corrupted as a result.

data=ordered: Data appended to a file will be written before the metadata extending the length of the file is written, and in certain cases the data will be written before file renames (partial ordering), but the data itself is unjournalled and may be only partially complete for updates.
+ Does not write data to the media twice
+ A crash or power failure will not leave old uninitialized data in files.
- Data writes to files may only partially complete in the event of a crash. No problems for logfiles or self-journalled application databases, but others may experience partial writes in the event of a crash and need recovery.
data=journalled: Data and metadata are both journalled, meaning that a given data write will either complete or it will never occur, although the precise ordering is not guaranteed. This also implies all of the metadata guarantees of data=ordered.
+ Direct IO data writes are effectively atomic, resulting in less likelihood of data loss for application databases which do not do their own journalling. This means that a power failure or system crash will not result in a partially-complete write.
- Cached writes are not atomic
+ For small cached file writes (of only a few filesystem pages) there is a good chance that kernel writeback will queue the entire write as a single I/O and it will be protected as a result. This helps reduce the chance of serious damage to some text-based database files (such as those for some Wikis), but is obviously not a guarantee.
- This writes all data to the block device twice (once to the FS journal and once to the data blocks). This may be especially bad for write-limited Flash-backed devices.

Cheers, Kyle Moffett
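[Editor's note: for completeness, the data=journalled setup discussed in this taxonomy maps onto a mount(2) call like the sketch below. The device and mountpoint are taken from the fstab lines quoted elsewhere in this thread; the flag set is an assumption, and in the thread itself the journal_data option was actually set persistently via tune2fs.]

    /* Sketch: mounting an ext4 volume in data=journal mode via
     * mount(2), equivalent to the fstab options in this thread.
     * Requires CAP_SYS_ADMIN. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        if (mount("/dev/mapper/db-postfix", "/var/spool/postfix", "ext4",
                  MS_NOSUID | MS_NODEV | MS_NOATIME,
                  "data=journal,nodelalloc") != 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }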
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
On Jun 24, 2011, at 09:46, Jan Kara wrote:

On Thu 23-06-11 16:19:08, Moffett, Kyle D wrote:

Besides which, line 534 in the Debian 2.6.32 kernel I am using is this one:

    J_ASSERT(commit_transaction->t_nr_buffers <= commit_transaction->t_outstanding_credits);

Hmm, OK, so we've used more metadata buffers than we told JBD2 to reserve. I suppose you are not using data=journal mode and the filesystem was created as ext4 (i.e. not converted from ext3), right? Are you using quotas?

The filesystem *is* using data=journal mode. If I switch to data=ordered or data=writeback, the problem goes away. The filesystems were created as ext4 using the e2fsprogs in Debian squeeze (1.41.12), and the kernel package is 2.6.32-5-xen-amd64 (2.6.32-34squeeze1). The exact commands I used to create the Postfix filesystems were:

    lvcreate -L 5G -n postfix dbnew
    lvcreate -L 32M -n smtp dbnew
    mke2fs -t ext4 -L db:postfix /dev/dbnew/postfix
    mke2fs -t ext4 -L db:smtp /dev/dbnew/smtp
    tune2fs -i 0 -c 1 -e remount-ro -o acl,user_xattr,journal_data /dev/dbnew/postfix
    tune2fs -i 0 -c 1 -e remount-ro -o acl,user_xattr,journal_data /dev/dbnew/smtp

Then my fstab has:

    /dev/mapper/dbnew-postfix /var/spool/postfix ext4 noauto,noatime,nosuid,nodev 0 2
    /dev/mapper/dbnew-smtp    /var/lib/postfix   ext4 noauto,noatime,nosuid,nodev 0 2

I don't even think I have the quota tools installed on this system; there are certainly none configured.

If somebody can tell me what information would help to debug this I'd be more than happy to throw a whole bunch of debug printks under that error condition and try to trigger the crash with that. Alternatively I could remove that J_ASSERT() and instead add some debug further down around the commit_transaction->t_outstanding_credits--; to try to see exactly what IO it's handling when it runs out of credits.

The trouble is that the problem is likely in some journal list shuffling code, because if just some operation wrongly estimated the number of needed buffers, we'd fail the assertion in jbd2_journal_dirty_metadata():

    J_ASSERT_JH(jh, handle->h_buffer_credits > 0);

Hmm, ok... I'm also going to turn that failing J_ASSERT() into a WARN_ON() just to see how much further it gets. I have an easy script to recreate this data volume even if it gets totally hosed anyways, so...

The patch below might catch the problem closer to the place where it happens... Also possibly you can try a current kernel to see whether the bug happens with it or not.

I'm definitely going to try this patch, but I'll also see what I can do about trying a more recent kernel. Thanks!

Cheers, Kyle Moffett
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
Hello again everyone,

I'm in the middle of doing some software testing on a pre-production clone of this system using some modified software configurations and a testing-only data volume, and I've managed to trigger this panic again. The trigger was exactly the same; I had a bunch of queued emails from logcheck because my TLS configuration was wrong, then I fixed the TLS configuration and typed postqueue -f to send the queued mail.

Ted, since this new iteration has no customer data, passwords, keys, or any other private data, I'm going to try to get approval to release an exact EC2 image of this system for you to test with, including the fake data volume that I triggered the problem on. If not, I can certainly reproduce it now by stopping email delivery and generating a lot of fake syslog spam; I can try applying kernel patches and report what happens.

Hopefully you're still willing to help out tracking down the problem? Thanks again!

Cheers, Kyle Moffett
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
On Jun 23, 2011, at 16:55, Sean Ryle wrote:

Maybe I am wrong here, but shouldn't the cast be to (unsigned long) or to (sector_t)? Line 534 of commit.c:

    jbd_debug(4, "JBD: got buffer %llu (%p)\n", (unsigned long long)bh->b_blocknr, bh->b_data);

No, that printk() is fine; the format string says %llu, so the (unsigned long long) cast is correct. Besides which, line 534 in the Debian 2.6.32 kernel I am using is this one:

    J_ASSERT(commit_transaction->t_nr_buffers <= commit_transaction->t_outstanding_credits);

If somebody can tell me what information would help to debug this I'd be more than happy to throw a whole bunch of debug printks under that error condition and try to trigger the crash with that. Alternatively I could remove that J_ASSERT() and instead add some debug further down around the commit_transaction->t_outstanding_credits--; to try to see exactly what IO it's handling when it runs out of credits.

Any ideas?

Cheers, Kyle Moffett
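[Editor's note: a sketch of the instrumentation proposed above — logging the buffer being processed just before the credit is consumed, so the I/O that exhausts the reservation can be identified. This is reconstructed from the description; the surrounding commit-loop context in 2.6.32's fs/jbd2/commit.c is abbreviated here, and bh is assumed to be the buffer head in scope at that point.]

    /* Hypothetical debugging hunk around the credit decrement. */
    if (commit_transaction->t_outstanding_credits <= 0) {
            printk(KERN_ERR
                   "JBD2: out of credits: nr_buffers=%d credits=%d block=%llu\n",
                   commit_transaction->t_nr_buffers,
                   commit_transaction->t_outstanding_credits,
                   (unsigned long long)bh->b_blocknr);
    }
    commit_transaction->t_outstanding_credits--;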
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
On Apr 04, 2011, at 20:15, Ted Ts'o wrote:

On Mon, Apr 04, 2011 at 09:24:28AM -0500, Moffett, Kyle D wrote:

Unfortunately it was not a trivial process to install Debian squeeze onto an EC2 instance; it took a couple ugly Perl scripts, a patched Debian-Installer, and several manual post-install-but-before-reboot steps (like fixing up GRUB 0.99). One of these days I may get time to update all that to the official wheezy release and submit bug reports.

Sigh, I was hoping someone was maintaining semi-official EC2 images for Debian, much like alestic has been maintaining for Ubuntu. (Hmm, actually, he has EC2 images for Lenny and Etch, but unfortunately not for squeeze. Sigh.)

The Alestic EC2 images (now replaced by official Ubuntu images) use kernel images formed as AKIs, which means users can't upload their own. Prior to a couple of Ubuntu staff getting special permission to upload kernel images, all the Alestic EC2 images just borrowed RedHat or Fedora kernels and copied over the modules. The big problem for Squeeze is that it uses a new udev which is not compatible with those older kernels.

For the Debian-Installer and my Debian images, I use the PV-GRUB AKI to load a kernel image from my rootfs. Specifically, one of the Perl scripts builds an S3-based AMI containing a Debian-Installer kernel and initramfs (using a tweaked and preseeded D-I build). It uploads the AMI to my account and registers it with EC2. Then another Perl script starts the uploaded AMI and attaches one or more EBS volumes for the Debian-Installer to use. When you've completed the install, it takes EBS snapshots and creates an EBS-backed AMI from those. The scripts use an odd mix of the Net::Amazon::EC2 CPAN module and shell callouts to the ec2 tools, but they seem to work well enough.

I'm actually using the official Debian Xen kernels for both the install process and the operational system, but the regular pv_ops kernels (without extra Xen patches) work fine too. The only bug I found so far was a known workaround for old buggy hypervisors: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=592428 That one is fixed in the official squeeze release.

It's probably easier for me to halt email delivery and clone the working instance and try to reproduce from there. If I recall, the (easily undone) workaround was to remount from data=journal to data=ordered on a couple filesystems. It may take a day or two to get this done, though.

Couple of questions which might give me some clues: (a) was this a natively formatted ext4 file system, or an ext3 file system which was later converted to ext4?

All the filesystems were formatted like this using Debian e2fsprogs as of 9 months ago:

    mke2fs -t ext4 -E lazy_itable_init=1 -L db:mail /dev/mapper/db-mail
    tune2fs -i 0 -c 1 -e remount-ro -o acl,user_xattr,journal_data /dev/mapper/db-mail

Ooooh, could the lazy_itable_init have anything to do with it?

(b) How big are the files/directories involved? In particular, how big is the Postfix mail queue directory, and is it an extent-based directory? (What does lsattr on the mail queue directory report?)

Ok, there are a couple relatively small filesystems:

    /var/spool/postfix (20971520 sectors, 728K used right now)
    /var/lib/postfix (262144 sectors, 26K used right now)
    /var/mail (8380416 sectors, 340K used right now)

As far as I can tell, everything in each filesystem is using extents (at least I assume that's what this means from lsattr -R):

    -e- .
    -e- ./corrupt
    -e- ./deferred
    [...]
The /var/spool/postfix is the Postfix chroot as per the default Debian configuration.

I should also mention that the EC2 hypervisor does not seem to support barriers or flushes. PV-GRUB complains about that very early during the boot process.

(c) As far as file sizes, does it matter how big the e-mail messages are, and are there any other database files that postgress might be touching at the time that you get the OOPS?

I assume you mean postfix instead of postgres here. I'm not entirely sure because I can't reproduce the OOPS anymore, but there does not seem to be anything in the Postfix directories other than the individual spooled-mail files (one per email), some libraries, some PID files, some UNIX-domain sockets, and a couple of static files in etc/, so I would assume not. I'm pretty sure that it is /var/spool/postfix that was crashing. The emails that were triggering the issue were between 4k and 120k, but no more than 100-120 stuck emails total. The SSL session cache files are stored in /var/lib/postfix, which as I said above is an entirely separate filesystem.

I have found a bug in ext4 where we were underestimating how many journal credits were needed when modifying direct/indirect-mapped files (which would be seen on ext4 if you had an ext3 file system that was converted to start using extents
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
On Apr 05, 2011, at 15:07, Ted Ts'o wrote:

On Tue, Apr 05, 2011 at 10:30:11AM -0500, Moffett, Kyle D wrote:

Well, the base image is essentially a somewhat basic Debian squeeze for EC2 with our SSH public keys and a couple generic customizations applied. It does not have Postfix installed or configured, so there would be some work involved.

Well, if you can share that image in AWS with the ssh keys stripped out it would save me a bunch of time. I assume it's not set up to automatically set ssh keys and pass them back to AWS like the generic images can?

Well, the generic images just download to ~root/.ssh/authorized_keys from http://169.254.169.254/meta-data/public-keys/0/openssh-key They don't really generate a keypair themselves; that's just what AWS does, and it provides the pubkey via that URL. The 169.254.169.254 is just the link-local address for some virtualization infrastructure software. These days they even let you upload your own pubkey. Unfortunately our images don't do that download, although the patched D-I image does do that when initially setting up the network console.

If you send me an SSH key and AWS account number in private email then I will cook an updated image and share it out to you.

I also didn't see any problems with the system at all until the queue got backed up with ~100-120 stuck emails. After Postfix tried and failed to deliver a bunch of emails I would get the OOPS.

Yeah, what I'd probably try to do is install postfix and then send a few hundred messages to foo...@example.com and see if I can repro the OOPS.

Not sure if it's related, but the particular SMTP error that was causing things to back up in the first place was the remote server rejecting emails via SMTP errors after a *successful* connection.

Thanks for investigating!

Thanks for debugging!

Cheers, Kyle Moffett
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
On Apr 02, 2011, at 22:02, Ted Ts'o wrote:

Sorry for not following up sooner. Are you still able to reproduce this failure? If I set up an identical Debian stable instance on EC-2, am I likely to reproduce it myself? Do you have a package list or EC2 base image I can use as a starting point?

I'll need to check on this. Unfortunately it was not a trivial process to install Debian squeeze onto an EC2 instance; it took a couple ugly Perl scripts, a patched Debian-Installer, and several manual post-install-but-before-reboot steps (like fixing up GRUB 0.99). One of these days I may get time to update all that to the official wheezy release and submit bug reports.

I have an exact image of the failing instance, but it has proprietary data on it, and if I stand up an old copy I need to be careful not to actually let it send all the queued emails :-D.

It's probably easier for me to halt email delivery and clone the working instance and try to reproduce from there. If I recall, the (easily undone) workaround was to remount from data=journal to data=ordered on a couple filesystems. It may take a day or two to get this done, though.

If it comes down to it I also have a base image (from squeeze as of 9 months ago) that could be made public after updating with new SSH keys.

Thanks again!

Cheers, Kyle Moffett
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
On Apr 04, 2011, at 10:24, Moffett, Kyle D wrote:

On Apr 02, 2011, at 22:02, Ted Ts'o wrote:

Sorry for not following up sooner. Are you still able to reproduce this failure? If I set up an identical Debian stable instance on EC-2, am I likely to reproduce it myself? Do you have a package list or EC2 base image I can use as a starting point?

I'll need to check on this. Unfortunately it was not a trivial process to install Debian squeeze onto an EC2 instance; it took a couple ugly Perl scripts, a patched Debian-Installer, and several manual post-install-but-before-reboot steps (like fixing up GRUB 0.99). One of these days I may get time to update all that to the official wheezy release and submit bug reports.

I have an exact image of the failing instance, but it has proprietary data on it, and if I stand up an old copy I need to be careful not to actually let it send all the queued emails :-D.

It's probably easier for me to halt email delivery and clone the working instance and try to reproduce from there. If I recall, the (easily undone) workaround was to remount from data=journal to data=ordered on a couple filesystems. It may take a day or two to get this done, though.

If it comes down to it I also have a base image (from squeeze as of 9 months ago) that could be made public after updating with new SSH keys.

Bah... I went back to the old image that was crashing every boot before, and I can't find any way to make it crash at all now... If I manage to reproduce it again later I'll send you another email.

Cheers, Kyle Moffett
Bug#615998: linux-image-2.6.32-5-xen-amd64: Repeatable kernel BUG at fs/jbd2/commit.c:534 from Postfix on ext4
Whoops, looks like the Debian bug-tracker lost the CC list somehow. I believe I've got all the CCs re-added; sorry for any duplicate emails.

On Mar 01, 2011, at 11:52, Kyle Moffett wrote:

Package: linux-2.6
Version: 2.6.32-30
Severity: important

I'm getting a repeatable BUG from ext4, which seems to be caused by Postfix processing its mail queue. The specific filesystem block device that has the problem seems to be dm-13, which on this boot is the logical volume containing the /var/spool/postfix chroot. This is a completely standard Debian installation running on an Amazon EC2 instance (x86_64). The filesystem is mounted in data=journal mode.

This crash is *very* repeatable. It occurs almost every reboot when there are more than 1 or 2 queued emails. I will try re-mounting the filesystem in data=ordered mode momentarily. The relevant filesystems are:

    /dev/mapper/system-root /                  ext4 rw,noatime,barrier=1,data=ordered 0 0
    /dev/mapper/system-var  /var               ext4 rw,noatime,barrier=1,nodelalloc,data=journal 0 0
    /dev/mapper/system-log  /var/log           ext4 rw,nosuid,nodev,noatime,barrier=1,data=ordered 0 0
    /dev/xvda1              /boot              ext3 rw,noatime,user_xattr,acl,data=journal 0 0
    /dev/mapper/db-mail     /var/mail          ext4 rw,nosuid,nodev,noatime,barrier=1,data=ordered 0 0
    /dev/mapper/db-postfix  /var/spool/postfix ext4 rw,nosuid,nodev,noatime,barrier=1,nodelalloc,data=journal 0 0
    /dev/mapper/db-smtp     /var/lib/postfix   ext4 rw,nosuid,nodev,noatime,barrier=1,nodelalloc,data=journal 0 0
    /dev/mapper/db-smtpcfg  /etc/postfix       ext4 rw,nosuid,nodev,noatime,barrier=1,nodelalloc,data=journal 0 0

In particular, I note that there was a previous report of a BUG at fs/jbd2/commit.c:533 which never seemed to get isolated: http://www.kerneltrap.com/mailarchive/linux-ext4/2009/9/2/6373283

I need to get this system operational again right now, but I'm going to take a consistent snapshot of it so I can debug it later.

NOTE: For followers on the linux-ext4 mailing list, this particular system is running the Debian squeeze kernel (based on 2.6.32), so it's theoretically possible this bug has been fixed upstream since then. I didn't have any luck finding such a fix on Google, though.

--
Package-specific info:
** Version: Linux version 2.6.32-5-xen-amd64 (Debian 2.6.32-30) (b...@decadent.org.uk) (gcc version 4.3.5 (Debian 4.3.5-4) ) #1 SMP Wed Jan 12 05:46:49 UTC 2011
** Command line: root=/dev/mapper/system-root ro
** Tainted: D (128)
* Kernel has oopsed before.
** Kernel log:
[ 118.525038] alloc irq_desc for 526 on node -1
[ 118.525040] alloc kstat_irqs on node -1
[ 118.700415] device-mapper: uevent: version 1.0.3
[ 118.700890] device-mapper: ioctl: 4.15.0-ioctl (2009-04-01) initialised: dm-de...@redhat.com
[ 118.925563] EXT4-fs (dm-0): INFO: recovery required on readonly filesystem
[ 118.925580] EXT4-fs (dm-0): write access will be enabled during recovery
[ 118.968700] EXT4-fs (dm-0): orphan cleanup on readonly fs
[ 118.968716] EXT4-fs (dm-0): ext4_orphan_cleanup: deleting unreferenced inode 790044
[ 118.968761] EXT4-fs (dm-0): ext4_orphan_cleanup: deleting unreferenced inode 790012
[ 118.968768] EXT4-fs (dm-0): ext4_orphan_cleanup: deleting unreferenced inode 790011
[ 118.968775] EXT4-fs (dm-0): ext4_orphan_cleanup: deleting unreferenced inode 790010
[ 118.968782] EXT4-fs (dm-0): ext4_orphan_cleanup: deleting unreferenced inode 790009
[ 118.968788] EXT4-fs (dm-0): 5 orphan inodes deleted
[ 118.968794] EXT4-fs (dm-0): recovery complete
[ 118.979150] EXT4-fs (dm-0): mounted filesystem with ordered data mode
[ 119.293543] udev[204]: starting version 164
[ 119.366778] input: PC Speaker as /devices/platform/pcspkr/input/input1
[ 119.436417] Error: Driver 'pcspkr' is already registered, aborting...
[ 124.153241] Adding 4194296k swap on /dev/xvdb1. Priority:-1 extents:1 across:4194296k SS
[ 125.156599] loop: module loaded
[ 138.650657] EXT4-fs (dm-21): Ignoring delalloc option - requested data journaling mode
[ 138.650959] EXT4-fs (dm-21): mounted filesystem with journalled data mode
[ 138.660092] EXT4-fs (dm-22): mounted filesystem with ordered data mode
[ 138.674436] kjournald starting. Commit interval 5 seconds
[ 138.675145] EXT3 FS on xvda1, internal journal
[ 138.675155] EXT3-fs: mounted filesystem with journal data mode.
[ 138.728462] EXT4-fs (xvdc): mounted filesystem without journal
[ 138.745406] EXT4-fs (dm-17): mounted filesystem with ordered data mode
[ 138.748531] EXT4-fs (dm-18): mounted filesystem with ordered data mode
[ 138.774667] EXT4-fs (dm-19): mounted filesystem with ordered data mode
[ 138.780834] EXT4-fs (dm-2): Ignoring delalloc option - requested data journaling mode
[ 138.781400] EXT4-fs (dm-2): mounted filesystem with journalled data mode
[ 138.784700] EXT4-fs (dm-1): Ignoring delalloc option - requested data journaling mode
[ 138.784773] EXT4-fs (dm-1): mounted filesystem
Bug#592428: Fix 2.6.32 XEN guest on old buggy RHEL5/EC2 hypervisor (XSAVE)
On Aug 11, 2010, at 10:55, Jeremy Fitzhardinge wrote:

On 08/11/2010 01:53 AM, Ian Campbell wrote:

On Wed, 2010-08-11 at 03:31 +0100, Ben Hutchings wrote:

On Mon, 2010-08-09 at 19:29 -0400, Kyle Moffett wrote:

Would it be possible to apply the attached Fedora/Ubuntu kernel patch to Debian as well? The Fedora link is: http://cvs.fedoraproject.org/viewvc/F-13/kernel/fix_xen_guest_on_old_EC2.patch And the Ubuntu link: http://kernel.ubuntu.com/git?p=rtg/ubuntu-maverick.git;a=commit;h=1a30f99 As far as I can tell, no released version of Xen currently supports XSAVE, so this change is effectively a NOP on all newer hypervisors, but it allows functionality on older hypervisors (such as RHEL5, or when running on Amazon's EC2 service). [...]

The comment says that 'There is only potential for guest performance loss on upstream Xen', which implies that XSAVE is supported now.

I spent some time searching, and I can't find any reference to XSAVE support in upstream Xen. There are some email threads which discuss potential patches, but all the comments seem to indicate that all of the proposed methods for supporting XSAVE fail catastrophically during instance migration.

Ian, what's your take on this? Is it worth trying to use XSAVE, and if so, is there a way to detect the broken HV versions before doing so?

The following commit seems to be in v2.6.31-rc1; is it not sufficient to allow correct operation on these older hypervisors? If not, it would be nice to know why.

The patch referred to by those two links says that old versions of Xen will simply kill the domain if they try to set CR4 bits the hypervisor doesn't understand, so this patch will not work.

xen: mask XSAVE from cpuid

Xen leaves XSAVE set in cpuid, but doesn't allow cr4.OSXSAVE to be set. This confuses the kernel and it ends up crashing on an xsetbv instruction.

I directly tested the Debian 2.6.32-5-amd64 pvops kernel on the Amazon EC2 service (which uses one of the old buggy hypervisors). When I used the unmodified Debian kernel (which includes the "xen: mask XSAVE from cpuid" patch), my instance reboots before logging any output. When I use the same kernel patched with fix_xen_guest_on_old_EC2.patch, it correctly boots and runs.

The kernel can take a noxsave option on the command line, which I imagine would also work around the issue.

Tried this too; it does not help. It would have made my life a lot easier if it did.

If the hypervisor is old-but-not-too-old, you may also have the option of masking the xsave bit in cpuid via the domain config file.

Unfortunately many virtual hosting platforms don't give you the option of messing with the domain config file. :-(

Thanks for all your help!

Cheers, Kyle Moffett
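[Editor's note: a sketch of the general approach the patches above take, reconstructed from this discussion — it is not the verbatim Debian/Fedora hunk, and the variable and function names are assumptions. The idea: when running paravirtualized under Xen, clear the XSAVE feature bit that the hypervisor leaks through CPUID, so the guest never attempts to set CR4.OSXSAVE (which old hypervisors either reject or punish by killing the domain).]

    /* Hypothetical fragment modeled on the Xen CPUID-masking code
     * discussed above; names and structure are assumed, not quoted. */
    #define XSAVE_BIT (1u << 26)   /* CPUID.1:ECX bit 26 = XSAVE */

    static unsigned int cpuid_leaf1_ecx_mask = ~0u;

    static void xen_init_cpuid_mask(void)
    {
            /* Hide XSAVE from the guest kernel's feature detection so
             * it never executes xsetbv or sets CR4.OSXSAVE. */
            cpuid_leaf1_ecx_mask &= ~XSAVE_BIT;
    }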
Bug#592428: Fix 2.6.32 XEN guest on old buggy RHEL5/EC2 hypervisor (XSAVE)
On Aug 09, 2010, at 19:29, Kyle Moffett wrote:

In particular, I'm trying to write a script that packages up a vmlinuz and initrd.gz from the Debian-Installer to allow them to be easily run unmodified in an Amazon EC2 VM (now that Amazon supports using your own custom kernel).

I can confirm that if I add this patch to the stock squeeze amd64 kernel (vers 2.6.32-19) and then dpkg-buildpackage -us -uc -B -j8, I get a vmlinuz binary which can be used to start up Debian-Installer on an Amazon EC2 instance using the default amd64 kernel module udebs.

Cheers, Kyle Moffett